AI is destroying Open Source, and it's not even good yet

https://www.jeffgeerling.com/blog/2026/ai-is-destroying-open-source/
161•VorpalWay•2h ago

Comments

VorpalWay•2h ago
This one is probably going to be controversial. But I feel highlighting the drawbacks is also important, not just the benefits.
stickynotememo•1h ago
It's quite unfortunate that this has to be controversial.
jazz9k•2h ago
"If this was a problem already, OpenClaw's release, and this hiring by OpenAI to democratize agentic AI further, will only make it worse. Right now the AI craze feels the same as the crypto and NFT boom, with the same signs of insane behavior and reckless optimism."

There are definitely people abusing AI and lying about what it can actually do. However, crypto and NFTs are pretty much useless. Many people (including me) have already increased their productivity using LLMs.

This technology just isn't going away.

ggm•2h ago
The underlying tech in a signed public ledger, using chaining methods and Merkle trees to record things with non-repudiation, that's useful. I went to a meeting which included people from the Reserve Bank of Australia or the financial regulator, and they said that between nation states, settlement was about mutuality, and absent a regulator to tell you what to do, federated processes around things like this were entirely rational choices. Nothing whatsoever about Bitcoin, Ethereum, the hype machine, or rug pulls; just the underlying tech using normal PKI, some data structures, and HSM-backed processes. The regulator said informally that in a regulated monopoly, agreeing to use mutual-(dis)trust methods like a chain would be acceptable as an audit method. Nothing about you or me, nothing about hype. Mechanistic settlement methods amongst competitors in a reasonably transparent manner.
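To make the "underlying tech" concrete, here is a minimal Python sketch of a tamper-evident, hash-chained ledger. All of the names are illustrative; a real settlement system would layer PKI signatures, Merkle trees, and HSM-backed keys on top of this basic idea:

    import hashlib, json

    def entry_hash(prev_hash, payload):
        # Chain each entry to its predecessor: editing any earlier
        # entry changes its hash and breaks every later link.
        blob = prev_hash + json.dumps(payload, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def append(ledger, payload):
        prev = ledger[-1]["hash"] if ledger else "genesis"
        ledger.append({"payload": payload, "hash": entry_hash(prev, payload)})

    def verify(ledger):
        prev = "genesis"
        for entry in ledger:
            if entry["hash"] != entry_hash(prev, entry["payload"]):
                return False
            prev = entry["hash"]
        return True

    ledger = []
    append(ledger, {"from": "bank_a", "to": "bank_b", "amount": 100})
    append(ledger, {"from": "bank_b", "to": "bank_c", "amount": 40})
    assert verify(ledger)
    ledger[0]["payload"]["amount"] = 999  # tamper with history
    assert not verify(ledger)

Each participant can re-run verify() independently, which is all the mutual (dis)trust audit property really needs.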

Some payment chains are painful. An awful lot of middlemen take a cut. Some payment chains impose burdens on the endpoint, like 90-day settlement debts, which could be avoided with some use of tech. Nothing about the hype, just modification of financial transactions, but they could be done other ways as well (as could the settlement ideas above).

NFTs follow the same logic as bearer bonds. They're useful in very specific situations of value transfer, and almost nothing else. The use isn't about the artwork on the front, it's the possession of a statement of value. Like bonds, they get discounted. The value is therefore a function of the yield and the trust in the chain of sigs asserting it's a debt of that value. Not identical, but the concept stripped of the ego element isn't that far off.

Please note I think Bored Apes and coins are stupid. I am not attempting to promote the hype.

AI is the same. LLMs are useful. There are functional tools in this. The sheer amount of capital being sunk into venture plays is, however, disconnected from that utility.

Terr_•1h ago
Half-agree: "Blockchain" systems contain new and useful technology, but the useful technology is not new, and the new technology is not so useful. If we keep the useful stuff, we're basically back at regular old distributed databases.

The key blockchain requirement is allowing unrestricted node membership. From that flows a dramatic explosion of security issues, performance issues, and N-level deep workarounds.

In the case of a bunch of banks trying to keep each other honest, it's drastically simpler/faster/cheaper to allocate a certain number of fixed nodes to be run by different participants and trusted outside institutions.

One doesn't need to trust every node, just that a majority is unlikely to be suborned, and you'll know in advance which majorities are possible. The bank in Australia probably doesn't want or need to shift some of that responsibility outside the group, onto literally anybody who shows up with some computing power.
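As a sketch of how little machinery the fixed-membership version needs (node names made up for illustration), accepting a ledger head is just counting attestations from a known set, with no proof-of-work needed to keep strangers from out-voting the members:

    from collections import Counter

    TRUSTED = {"bank_a", "bank_b", "bank_c", "auditor", "regulator"}
    QUORUM = len(TRUSTED) // 2 + 1  # simple majority of a *known* set

    def accepted_head(votes):
        # votes maps node name -> the ledger-head hash it attests to.
        # Unknown nodes are ignored outright rather than priced out
        # with proof-of-work.
        tally = Counter(h for node, h in votes.items() if node in TRUSTED)
        if not tally:
            return None
        head, count = tally.most_common(1)[0]
        return head if count >= QUORUM else None

    accepted_head({"bank_a": "abc1", "bank_b": "abc1",
                   "bank_c": "abc1", "mallory": "evil"})  # -> "abc1"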

ggm•1h ago
That's fair.
akoboldfrying•1h ago
An analogy for cryptocurrency that I like is lasers. I remember reading an Usborne book about lasers as a kid and thinking they were the coolest thing ever and would doubtless find their way into every technology because glowing beams of pure light energy, how could they not transform the world!

Lasers turn out to be useful for... eye surgery, and pointing at things, and reading bits off plastic discs, and probably a handful of other niche things. There just aren't that many places where what they can do is actually needed, or better than other more pedestrian ways of accomplishing the same thing. I think the same is true of crypto, including NFTs.

verdverm•1h ago
You should check out HN /new and /show the last couple of weeks.

It's just like all the ICO, NFT, and other crypto launches, but for all the little things that you can do with AI. Everybody or their bot has some new game-changing AI project. It's a tiring mess right now, which I do hope will similarly die down in time.

For clarity, I was a big fan of blockchain before it got bad, still am for things like ZKP and proof-of-authority, and I am similarly very excited for what AI enables, but (imo) one cannot easily argue there is not a spam problem that feels similar.

BoneShard•9m ago
Check LinkedIn, it's like HN times 100.
kace91•1h ago
The tech isn't going away, but its usability is probably going to be recalibrated once we factor in the long-term dangers (effects on learning and acquiring/maintaining skills, maintenance costs of AI-made code, etc.).

We’ll still have the “best code tooling ever invented” stuff, but if the market is assuming “intellectual workers all replaced”, there’s still a bubble pop waiting for us.

shimman•1h ago
I don't like the weasel word "democratize" because there is nothing democratic about being forced to use a tool on condition of keeping your job. Democratization goes both ways: if you can't destroy something, you cannot truly control it. I'm sure if you put it to an actual vote, many people would be surprised at the results.
bsza•1h ago
It doesn't have to go away, it just needs to be better regulated. I could also increase my productivity by taking Adderall, if that was my end goal. But most people don't, since there are other factors to take into consideration, like becoming unable to function without it, or long-term cognitive decline...
keernan•2h ago
>>... Crypto ... are [is] pretty much useless.

Other than by corrupt criminals and mafia types who have a need to covertly hide cash.

And then the current administration wants the government to 'protect' crypto investors against big losses. Gotta love it.

henry2023•1h ago
>> Other than by corrupt criminals and mafia types who have a need to covertly hide cash.

I've got an Argentinian friend who sends crypto to his mother because he pays less than 0.5% in fees and exchange rates instead of close to 5% using the traditional way. From now on I'll call him a corrupt criminal.

keernan•1h ago
No need to be snarky. I didn't realize there actually were any legitimate reasons to own crypto.
nyc_data_geek1•1h ago
And anyone who is under sanction or lives in a nation under economic sanction, and wants access to a means of sending payments across borders that would otherwise be closed to them.

And anyone who lives in a polity whose local currency may be undergoing rapid devaluation/inflation.

And anyone who needs a form of wealth that no local authority is technically capable of alienating them from - ie: if you need to pack everything in a steamer trunk to escape being herded into cattle cars, you can memorize a seed phrase and no one can stop you from taking your wealth with you.

And any polity who may no longer wish to use dollars as the international lingua franca of trade, as the global foreign exchange reserve currency, to reduce the degree to which their forex reserves prop up American empire.

Sadly, all of these use cases appear increasingly relevant as time goes on.

keernan•1h ago
ok - I am willing to be educated. Thank you.
the-anarchist•1h ago
> corrupt criminals and mafia types who have a need to covertly hide cash

You're describing the people that use actual cash to launder and hide, well, cash, and that have done so for centuries, long before crypto had even been invented.

A few web searches on <big bank name> + "money laundering scandal" (e.g. "HSBC money laundering scandal") can offer valuable insights.

keernan•1h ago
>> that have done so for centuries

There is no doubt crypto processes trillions of dollars of illegal cash. Way easier for the illegal cash industry to wash their cash than ever before.

mulmen•52m ago
How does crypto make money laundering at scale easy?
nblgbg•1h ago
Isn’t it also destroying the internet with low-quality content and affecting content creation in general? Can LLMs still rely on data from the open internet for training?
bmurphy1976•1h ago
I'm going to take issue with AI destroying the internet. Our short-attention-span, profit-driven culture was already well on its way to trashing everything that was good. AI is only accelerating the inevitable.
slopinthebag•1h ago
Ya but that's like saying we were going 10kmh, it's nbd that we accelerate to 1000kmh since we were gonna hit the wall anyways
_heimdall•41m ago
This is exactly how we collectively "solve" so many problems today, though; it's far from unique to this topic.

We over-medicate people, especially the elderly, because each new med has side effects and they're dying eventually anyway. We print more and more debt to paper over massive budget deficits because the unspoken reality is that we're financially screwed either way. We pile more and more regulations on because we'd rather further grow the government and kick the can a few more times. We bolt one new emissions system after another onto our diesel engines because they're already unreliable, who cares.

We don't consider how we got here, only what the next step we take should be. And don't even ask where a step should be taken; progress requires changing things constantly, and we rarely give ourselves time to look back and retrace our steps.

fyredge•10m ago
Your examples don't support your premise. Over-medication comes from all the attempts to fix the various medical conditions found. Regulations are added to fix all the problems of people finding new ways to abuse the system.

This is entirely opposite from accelerationism, which would advocate for less medication so that sick people die quicker, and less regulation so that society would be exploited faster and collapse faster.

bmurphy1976•8m ago
Well, then our disagreement is that I feel we were already going 1000 km/h. Nowhere did I say we should keep doing this, or that it was a good thing, or that we should ignore it. My point is simple: we already needed to stop a long time ago.

Let me re-use your analogy. We were already driving off a cliff, and we are trying to blame the fact that we're pushing on the gas and accelerating, while ignoring that we were already heading that way and the brake lines were cut.

add-sub-mul-div•58m ago
This is the same stupid reasoning that told us Trump would be a good outcome because the system was imperfect and ruining it fully would magically create a better one.
bmurphy1976•14m ago
What the hell?

I didn't say this was a good thing, I only said things were already fucked. And Trump is also a symptom of a deeper rot in our system. He just happens to be the asshole who took advantage of it.

If you don't fix the deeper issues, it doesn't matter what's going to happen. Blaming AI is blaming a symptom, not the cause.

Stating that we need to fix the deeper problem isn't even close to the same thing as whatever this nonsense is you responded with.

api•56m ago
Beat me to it. Facebook/Meta, Twitter/X, Google/YouTube, and TikTok have done quite a bit more damage to the Internet than AI.

The future of the net was closed gated communities long before AI came along. At worst it’s maybe the last nail in the coffin. But the coffin lid was already on and the man inside was already dead.

AI is, I think, more mixed. It is creating more spam and noise, but AI itself is also fascinating to play with. It’s a genuine innovation and playing with it sometimes makes me feel the way I did first exploring the web.

mmooss•46m ago
Agreed: The Internet has long been up to your eyeballs in low-quality content (i.e., bullsh-t). Blaming LLM software for it ignores the well-known reality of just a year or two ago.
stickynotememo•1h ago
So what's the alternative? Should we go back to reading encyclopedias from the 2010s? I ask this because the need for information hasn't decreased for human beings, just because the capability to produce slop has suddenly increased.
skeeter2020•36m ago
>> I ask this because the need for information hasn't decreased for human beings, just because the capability to produce slop has suddenly increased.

Isn't that the complaint to which you're responding? the SUPPLY side of the equation is the problem, so reading encyclopedias wouldn't impact that. Funny enough the criticism of Wikipedia was that a bunch of amateurs couldn't beat the quality from a small group of experts curating a controlled collection, and we saw that wasn't true. Maybe AI has pushed this to a new level where we need to tighten access and attention once again?

fullshark•56m ago
The economics of content platforms already started destroying the internet. A lot of the reason the internet was so good for so long was faith by creators that good content would win; that turned out to be false.
snarfy•38m ago
It doesn't have to be low quality. It really is another tool like any other. You can put low effort in and get working results. This low-effort working result gets shipped immediately and gives the whole process a bad rap. The source is generated crap that lacks craftsmanship and quality. But this gets AI dismissed when it shouldn't be. You can get quality, well-crafted source code if you make that a goal and keep iterating.
truncate•1h ago
A few patterns I've noticed on the open-source projects I've worked on:

1. AI slop PRs (sometimes giant). The author responds to feedback with LLM-generated responses and shows little evidence of having given any thought of their own to design decisions or implementation.

2. (1) often leads me to believe they probably haven't tested it properly or thought of edge cases. As a reviewer you now have to be extra careful about it (or just reject it).

3. A rise in students looking for a job/internship. The expectation is that untested LLM-generated code will earn them positive points because they have now dug into the codebase. (I've had cases where they said they hadn't tested the code, but that it should "just work".)

4. People are now even lazier about cleaning up code.

Unfortunately, all of these issues come from humans. LLMs are fantastic tools and as almost everyone would agree they are incredibly useful when used appropriately.

Nition•1h ago
> Unfortunately, all of these issues come from humans.

I've been thinking about this recently. As annoying as all the bots on Twitter and Reddit are, it's not bots spinning up bots (yet!), it's other humans doing this to us.

TOMDM•1h ago
> it's not bots spinning up bots (yet!)

Well, some of them are, but the bots bot is spun up by a human (or maybe bot n+1)

Nition•1h ago
Great bots have little bots, if one should deign to write 'em

And little bots have lesser bots, and so ad infinitum...

kerkeslager•1h ago
I've got a few open source projects out there, and I've almost never received any PRs for them until AI, simply because they were things I did for myself and never really promoted to anyone else. But now I'm getting obviously-AI PRs on a regular basis. Somehow people are using AI to find my unpromoted stuff and submit PRs to it.

My canned response now is, "Can you link me to the documentation you're using for this?" It works like a charm: the clanker never responds.

MBCook•56m ago
> Unfortunately, all of these issues come from humans.

They are. They’ve always been there.

The problem is that LLMs are a MASSIVE force multiplier. That’s why they’re a problem all over the place.

We had something of a mechanism to gate the amount of trash on the internet: human availability. That no longer applies. SPAM, in the non-commercial sense of just noise that drowns out everything else, can now be generated thousands of times faster than real content ever could be. By a single individual.

It’s the same problem with open source. There was a limit to the number of people who knew how to program enough to make a PR, even if it was a terrible one. It took time to learn.

AI automated that. Now everyone can make massive piles of complicated plausible looking PRs as fast as they want.

To whatever degree AI has helped maintainers, it is not nearly as effective a tool at helping them as it is at helping others generate things that waste their time. Intentionally or otherwise.

You can't just argue that AI can be a benefit, therefore everything is fine. Its externalities, in the digital world, are destroying things. And even if we develop mechanisms to handle the incredible volume, will we have much of value left by the time we get there?

This is the reason I get so angry at every pro AI post I see. They never seem to discuss the possible downsides of what they’re doing. How it affects the whole instead of just the individual.

There are a lot of people dealing with those consequences today. This video/article is an example of it.

_--__--__•39m ago
If only I were lucky enough to get LLM-generated responses. Usually a question like "Did you consider whether X would also solve this problem?" results in a flurry of force-pushed commits that overwrite history to do X, but also 7 other unrelated things that work around minor snags the LLM hit doing X.
pvillano•1h ago
AI training is information theft. AI slop is information pollution.
pvillano•1h ago
Search feels like fishing in an ocean of floating plastic.

Social media feels like parks smothered with smog.

It makes you stupid like leaded gas.

We'll probably be stuck with it forever, like PFAS

dtnewman•1h ago
Open Source isn't going anywhere. Open Contribution might be on the way out. I built an open source command line tool (https://github.com/dtnewman/zev) that went very minorly viral for a few days last year.

What I found in the following week is a pattern of:

1) People reaching out with feature requests (useful)

2) People submitting minor patches that take up a few lines of code (useful)

3) People submitting larger PRs that were mostly garbage

#1 above isn't going anywhere. #2 is helpful, especially since these are easy to check over. For #3, MOST of what people submitted wasn't AI slop per se, but just wasn't well thought out, or was of poor quality. Or a feature that I just didn't want in the product. In most cases, I'd rather have a #1 and just implement it myself in the way that I want the code organized, rather than someone submitting a PR with poorly written code. What I found is that when I engaged with people in this group, I'd see them post on LinkedIn or X the next day bragging about how they contributed to a cool new open-source project. For me, the maintainer, it was just annoying, and I wasn't putting this project out there to gain the opportunity to mentor junior devs.

In general, I like the SQLite philosophy of "open source, not open contribution." They are very explicit about this, but it's important for anyone putting out an open source project to know that you have ZERO obligation to accept any code or feature requests. None.

aethertap•12m ago
This comment really hit me - I have a few things I've worked on but never released, and I didn't even realize it was basically because I don't want to deal with all of that extra stuff. Maybe I'll release them with this philosophy.
loeber•1h ago
This is a deeply pessimistic take, and I think it's totally incorrect. While I believe that the traditional open source model is going to change, it's probably going to get better than ever.

AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model by which open source projects gain large numbers of dollar contributions, that the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.

For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...

lovich•1h ago
Why would people/companies donate more money to open source in the future that they don’t already donate today?

It's a tragedy-of-the-commons problem. Most of the money available isn't controlled by decision makers who are ideologically aligned with open source, so I don't see why they'd donate any more in the future.

They usually do so because they are critically reliant on a library that's going to die, think it's good PR, it makes engineers happy (don't think they care about that anymore), or they think they can gain control of some aspect of the industry (looking at you, Futurewei and the corporate workers of the Rust project).

loeber•1h ago
Because donating to open source projects today has an extremely unclear payoff. For example, I donate to KDE, which is my favorite Linux desktop environment. However, this does not have a measurable impact on my day-to-day usage of KDE. It's very abstract in that I'm making a tiny, opaque contribution to its development, but I have no influence on what gets developed.

More concretely, there are many features that I'd love to see in KDE which don't currently exist. It would be amazing if I could just donate $10, $20, $50 and submit a ticket for a maintainer to consider implementing the feature. If they agree that it's a feature worth having, then my donation easily covers running AI for an hour to get it done. And then I'd be able to use that feature a few days later.

sarchertech•1h ago
1. You can already do that; it just costs more than $10.

2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is going to spend a sizeable fraction of the time they would have spent coding the feature on reviewing and testing it instead. You're now back to 1.

Conceivably it might make it a little cheaper, but not anywhere close to the kind of money you’re talking about.

Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.

lovich•57m ago
Yea, that’s the ideologically not aligned part I referenced.

If AI can make features without humans, why would I, as a profit-maximizing organization, donate that resource instead of keeping it in house? If we're not going to have human eyes on it, then we're not getting more secure, I don't really think positive PR would exist for that, and it would deny competitors resources you now have that they don't.

saimiam•51m ago
> Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.

The comment you responded to is (presumably) talking about the transition phase where LLMs can help implement but not fully deliver a feature and need human oversight.

If there are reasonably good devs in low-CoL areas who can coax a new feature or bug fix for an open source project out of an LLM for $50, I think it's worth trialling as a business model.

matteotom•1h ago
Funding for open source projects has been a problem for about as long as open source projects have existed. I'm not sure I follow why you think specifying that donations will go towards LLM tokens will suddenly open the floodgates.
loeber•1h ago
If you don't get it, then you should read the blog post and come back if you have questions.
jscd•1h ago
Wow, impressively insufferable
matteotom•1h ago
I did. Your argument seems to be that LLMs allow users who want specific features to direct a donation specifically towards the (token) costs of developing that feature. But I don't see how that's any different from just offering to pay someone to implement the feature you want. In fact, this does happen, eg in the case of companies hiring Linux devs; but it hasn't worked as a general purpose OSS-funding mechanism.
avaer•1h ago
But locally, dollars are a zero-sum game. Your dollars came from someone else. If you make a project better for yourself without making it better for others you can possibly one-up others and make more dollars with it. If you make it better for everyone that's not necessarily the case. You're just diluting your money and soon enough you won't have money and you're eliminated from the race.

While I'd like to believe in the decency and generosity of humans, I don't get the economic case of donating money to the agent behind an OS project, when the person could spend the money on the tokens locally themselves and reap the exclusive reward. If it really is just about money that only makes sense.

Obviously this is a gross oversimplification, but I don't think you can ignore the rational economics of this, since in capitalism your dollars are earned through competition.

xyzzy123•56m ago
It would be cool if you could donate to a maintainer's favourite bot to get bugs fixed.

Usually, getting stuff fixed on main is better than being forced to maintain a private fork.

kerkeslager•1h ago
> AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours.

I think this is true, but misses the point: quantity of code contributions is absolutely useless without quality. You're correct that OSS programmer hours are the most scarce asset OSS has, but AI absolutely makes this scarce resource even more scarce by wasting OSS programmers' time sifting through clanker slop.

There literally isn't an upside. The code produced by AI simply isn't good enough consistently enough.

That's setting aside the ethical issues of stealing other people's work and spewing even more carbon into the atmosphere.

Ygg2•1h ago
Great.

Give money to maintainers? No.

Give money to bury maintainers in AI Slop? Yes.

abrookewood•1h ago
There are a few valid arguments that I see to support the pessimism:

1. When people use LLMs to code, they never read the docs (why would they), so they miss the fact that the open source library may have a paid version or extension. This means that open source maintainers will receive less revenue and may not be able to sustain their open source libraries as a result. This is essentially what the Tailwind devs mentioned.

2. Bug bounties have encouraged people to submit crap, which wastes maintainers' time and may lead them to close pull requests. If they do the latter, then they won't get any outside help (or at least, they will get less). Even if they don't do that, they now have a higher burden than previously.

voxl•1h ago
Open source will ban AI. I'd bet $100 that AI will get banned entirely, more and more often, from large OSS projects.
mythrwy•9m ago
How will they know who wrote the code?
Snakes3727•46m ago
Hi, I just wanted to let you know your article reads like it was written by AI, as you fail to go into any real explanation of anything.

I can summarize your entire essay as frankly:

"We can give maintainers of OSS projects money to maintain projects" revolutionary never been done before. /S

invalidname•18m ago
As a maintainer of a medium-size OSS project, I agree. We've been running the product for over a decade, and a few years back Google came out with a competitor that pretty much sucked the air out of our field. It didn't matter that our product was better; we didn't have the resources to compete with a Google hobby project.

As a result our work on the project got reduced to maintenance until coding agents got better. Over the past year I've rewritten a spectacular amount of the code using AI agents. More importantly, I was able to construct enterprise level testing which was a herculean task I just couldn't take up on my own.

The way I see it, AI brought back my OSS project that was heading to purgatory.

EDIT: Also, about OP's post: it's really f*ing bug bounties that are the problem. These things are horrible and should die in a fire...

xtreak29•1h ago
Reviewing code was also a big bottleneck. With a lot more untested code from authors who don't care about reviewing their own work, it will take an even greater toll on open source maintainers. Code quality standards differ between side projects and open source projects: ensuring good code quality enables long-term maintenance for open source projects that have to support a feature through the years as a compatibility promise.
sodapopcan•50m ago
That's where pair programming came in but it turns out that most people hate each other so much that they'd rather work with a machine pretending to be a person.

I realize there are many levels to this claim but I'm not being sarcastic at all here.

michelsedgh•1h ago
I think I have seen more open source projects get released since LLMs came out, and the rate seems to be increasing. The cost of making software and open sourcing it has gone down a lot. We see some slop, but as the models get better, the quality will get better. From the pace I have seen, going from GPT-3.5 to now Opus 4.6, I don't think it will be long before LLMs get much better than humans at coding!
tayo42•1h ago
LLMs are already better than most people at coding for typical tasks, imo.

I finally got around to Claude code and the code it generates and the debugging it does is pretty good.

Inb4 some random accuses me of being an idiot or shit engineer lol

michelsedgh•32m ago
Couldn't agree more; people forget most software out there has generally shitty code anyway. Also, this is the worst the LLMs will be, and they will only get better as time goes on…
PaulDavisThe1st•1h ago
From my POV (30 or so years working on the same FLOSS project), AI isn't "destroying Open Source" through an effect on contributions. It is, however, destroying open source through its ceaseless, relentless, unabatable crawling of open source git repositories, commit by commit rather than via git-clone(1).

Project after project reports wasted time, increased hosting/bandwidth bills, and all-around general annoyance from this UTTER BULLSHIT. But every morning we wake up, and it's still there, no sign of it ever stopping.

zer00eyz•1h ago
The house is poorly put together because the carpenter used a cheap nail gun and a crappy saw.

LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect

If you're a skilled dev in a "common" domain, an LLM can be an amazing tool when you integrate it into your workflow and play "code tennis" with it. It can change the calculus on "one-offs", "minor tools and utils", and "small automations" that in the past you could never justify writing.

I'm not a lawyer or a doctor. I would never take legal or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain, because I can work with it and take over when it goes off the rails.

bobpaw•1h ago
It is hard to test LLM legal/medical advice without risk of harm, but it is often exceedingly easy to test LLM-generated code. The most aggravating thing to me is that people just don't. I think the best thing we can do is to encourage everyone who uses/trusts LLMs to test and verify more often.
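For example, with a hypothetical slugify() standing in for whatever the model produced, the test is the part the human still has to write:

    import re

    def slugify(title):  # pretend this function came from an LLM
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_slugify_properties():
        for raw in ["Hello, World!", "  spaces  ", "", "a--b"]:
            slug = slugify(raw)
            assert slug == slug.lower()      # no stray capitals
            assert not slug.startswith("-")  # no leading separator
            assert not slug.endswith("-")    # no trailing separator
            assert re.fullmatch(r"[a-z0-9]*(-[a-z0-9]+)*", slug)

Five minutes of pytest on edge cases the prompt never mentioned catches most of the silent failures.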
mifydev•1h ago
Frankly, I don't like these kinds of takes. Yes, people are seeing more spam in their pull requests, but that's just what it is: spam that you need to learn how to filter. For regular engineers who can use AI, it's a blessing.

I'm a long-time Linux user - now I have more time to debug issues, submit them, and even do pull requests that I considered too time-consuming in the past. I want to, and now can, spend more time on debugging the Firefox issues that I see, instead of just dropping them.

I'm still learning to use AI well, and I don't want to submit unverified slop. It's my responsibility to provide a good PR. I'm creating my own projects to get the hang of my setup, and very soon I can start contributing to existing projects. Maintainers, on the other hand, need to figure out how to pick good contributors at scale.

sarchertech•56m ago
Well that’s the problem. AI is really good at making things that bypass people’s heuristics for spam.

Someone can spam me with more AI slop than I can vet, and it can pass any automated filter I can set up.

The solution is probably closed contributions because figuring out good contributors at scale sounds like figuring out how to hire at scale, which we are horrible at as an industry.

mhitza•1h ago
The hard drive shortage is already old news; CPUs are next.
Veedrac•57m ago
> From what I've seen, models have hit a plateau where code generation is pretty good...

> But it's not improving like it did the past few years.

As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?

martinald•53m ago
Yes, a strange comment. Opus 4.5 is significantly better than before, and Opus 4.6 is even better. Same with the 5.2 and 5.3 Codex models.

If anything, the pace has increased.

This may be one of the most important graphs to keep an eye on: https://metr.org/ and it tracks well to my anecdotal experience.

You can see the industry did hit a bit of a wall in 2024, where the improvements dropped below the log trend. However, in 2025 the industry is significantly _above_ the trend line.

Aurornis•34m ago
I see these claims in a lot of anti-LLM content, but I’m equally puzzled. The pace of progress feels very fast right now.

There is some desire to downplay or dismiss it all, as if the naysayers are going to get their “told you so” moment and it’s just around the corner. Yet the goalposts for that moment just keep moving with each new release.

It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.

mcny•55m ago
I feel like we are talking past each other.

1. I write hobby code all the time. I've basically stopped writing these by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to it. I had zero users before and I still have zero users. And that is ok.

2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. Best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on first attempt but usually maintainers are kind to newcomers because who knows maybe one of those newcomers could become one of the maintainers one day. "Not everyone can become a great artist, but a great artist can come from anywhere."

LLMs changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next-token predictor to do the work? To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day, and it is simply not possible to go in depth on everything. We must pick our battles.

I almost forgot what I was trying to say. The bottom line is, if you are doing your own thing like I am, LLMs are great. However, I would ask everyone to have empathy and not spread our diarrhea into other people's kitchens.

If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?

sheepscreek•13m ago
The real problem is that OSS projects do not have enough humans to manually review every PR.

Even if they were willing to deploy agents for initial PR reviews, it would be a costly affair and most OSS projects won’t have that money.

0xbadcafebee•54m ago
Remember when projects were getting overwhelmed by PRs from students just editing a line in a README so they could win a t-shirt? That was 2020, and they weren't using AI. The open source community has been going downhill for a while. The new generation isn't getting mentored by the old generation, so stable, old-fogey methods established by Linux distributions are eschewed by the new kids. Technology advancement has made open source interactions a little too easy, and unnecessarily fragile. Some ecosystems focus way too much on crappy reusable components, and don't focus enough on supply chain security.

Here's the good news: AI cannot destroy open source. As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive. It wouldn't be a bad thing for us to standardize open source a bit more, like templates for contributors' guides, automation to help troubleshoot bug reports, and training for new maintainers (to help them understand they have choices and don't need to give up their life to maintain a small project). And it's fine to disable PRs and issues. You don't have to use GitHub, or any service at all.

charcircuit•32m ago
>As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive.

You don't even need somebody. AI agents themselves can make and share projects.

skeeter2020•32m ago
I get your core point, but the reality is it CAN destroy the ecosystem around OSS upon which it heavily relies: discoverability and community. I don't think you're accurately representing just how much noise and confusion AI slop creates. When it comes to using GitHub, it's not because it is an amazing application, but because that's where the people are.
anilgulecha•54m ago
Prior to LLMs, the concept of "Open Source" could co-exist with "Free Software": one was a more pragmatic view of how to develop software, the other a political-activist position on how the code powering our world should be.

AI has laid bare the difference.

Open Source is significantly impacted. Business models based on it are affected. And those who were not taking the political position find that they may not prefer the state of the world.

Free software finds itself, at worst, a bit annoyed (need to figure out the slop problem), and at best, an ally in AI - the amount of free software being built right now for people to use is very high.

tjr•43m ago
I’ve seen different opinions. Can LLM-generated software be licensed under the GPL?
anilgulecha•21m ago
Can you link to them?

The way the world is currently working, code created by someone (using AI) is being treated as if it was authored by that someone. This is true across companies and FOSS. I think it's going to settle into this pattern.

tibiahurried•54m ago
The Internet was a fun place … until it turned into s.. with ads all over. Social media destroyed it.

AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue or implementing yet another 3D engine… now it is all extremely boring.

GaryBluto•44m ago
> AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue or implementing yet another 3D engine… now it is all extremely boring.

One could also say Multi-Drug Therapy killed the solidarity and shared struggle found in leper colonies.

Aurornis•38m ago
You can still debug that hobby 3D engine any way you want. Anything you could do 5 years ago you can still do now.

There is an entire new world of people having fun with LLM coding. There are people having fun with social media, too. These people having fun with their thing doesn’t make your thing less fun for you to do.

Let people enjoy things. You can do your own thing and they do theirs. The internet is a big place and there’s room for everyone to find their own way to have fun. If you can’t enjoy your thing because someone else is doing it differently, that’s a you problem.

adithyassekhar•36m ago
Not really when there's economic incentive and when you need to eat.
josephg•13m ago
So you want to have fun coding by hand while also making bank along the way? Yeah, those days seem to be increasingly over.

This is new for us, but it's not new globally. There used to be professional portrait painters before photography ruined that trade. Lots of great artists honed their skills and made a living that way. And there were skilled weavers before the power loom. Computers (humans who computed things) before the digital computer was invented. And so on. And I'm sure the first photographs didn't look as good as a skilled portrait painting. Arguably they still don't. But that didn't save portrait painting as a profession.

We’ll be the same. You can still write code by hand for fun, just like you can paint for fun. I’m currently better at solving problems and writing code than Claude. But Claude is faster than I am, and it’s improving much faster than I am. I think the days of making big money for writing software by hand are mostly over.

murphyslaw•19m ago
Older people like me could say that the Internet was a fun place until AOL came along.

IMO we're going to just have to deal with AI, like it or not.

1970-01-01•50m ago
We all know the solution will be yet another AI agent reviewing the reputation of the pull requests from the public and rating them. This even seems like an easy win for Microsoft and GitHub. Just make it already.
_heimdall•40m ago
Hello, social credit scores.
jongjong•48m ago
My current position is that AI companies should be taxed and the money should be distributed to open source developers.

There is a strong legal basis for this to happen, because if you read the MIT license, which is one of the most common and most permissive licenses, it clearly states that the code is made available for any "Person" to use and distribute. An AI agent is not a person, so technically it was never given the right to use the code for itself... It was not even given permission to read the copyrighted code, let alone ingest it, modify it, and redistribute it. Moreover, it is a requirement of the MIT license that the MIT copyright notice be included in all copies or substantial portions of the software... which agents are not doing, in spite of distributing substantial portions of open source code verbatim, especially when considered in aggregate.

Moreover, the fact that a lot of open source devs have changed their views on open source since AI arrived reinforces the idea that they never consented to their works being consumed, transformed, and redistributed by AI in the first place. So the violation applies both in terms of the literal wording of the licenses and in terms of intent.

Moreover, the usage of code by AI goes beyond just a copyright violation of the code/text itself; they appropriated ideas and concepts without giving due credit to their originators. So there is a deeper ethical component: we have no system to protect human innovation from AI. Human IP is completely unprotected.

That said, I think most open source devs would support AI innovation, but just not at their expense with zero compensation.

debarshri•38m ago
This weekend, I found an issue with Microsoft's new Golang version of sqlcmd. I ran Claude Code and fixed the issue, which I wouldn't have done if agent stuff did not exist. The fix was contributed back to the project.

I think it is about who is contributing, their intention, and various other nuances. I would still say it is a net good for the ecosystem.

mysterydip•29m ago
I think the problem is that determining who is contributing, their intention, and those other nuances takes a human's time and effort. And at some point the number of contributions becomes too much to sort through.
debarshri•27m ago
I think building enough barriers, processes, and mechanisms might work. I don't think it needs to be human effort.
softwaredoug•27m ago
That’s the positive case IMO - a human, you, remain responsible for the fix. It doesn’t matter if AI helped.

The negative case is free-running OpenClaw slop cannons that could even be malicious.

_joel•18m ago
I agree, but that's assuming the project accepts AI-generated code, of course, especially given the murky legality of accepting commits written by an AI trained on god knows what dataset.
thrance•15m ago
Genuinely interested in the PR, if you would kindly care to link it.
softwaredoug•35m ago
Are there maintainers of mature open source projects that can share their AI coding workflow?

The bias in AI coding discussions heavily skews greenfield. But I want to hear more from maintainers. By their nature they’re more conservative and care about balancing more varied constraints (security, performance, portability, code quality, etc etc) in a very specific vision based on the history of their project. They think of their project more like evolving some foundational thing gradually/safely than always inventing a new thing.

Many of these issues don’t yet matter to new projects. So it’s hard to really compare the greenfield with a 20 year old codebase.

giancarlostoro•33m ago
I mean, I have grabbed random non-greenfield projects and added features to them for my temporary/personal needs with Claude Code. The key thing is setting it up. The biggest thing is adopting good programming principles, like breaking up god classes. Things that help human devs consume code more easily turn out to work for LLMs too.
softwaredoug•31m ago
I have done this sort of thing too. I’m curious about big, mature projects like numpy or the Linux kernel.

It seems the users of this are so varied that refactors like what you describe would be rolled out more gradually than the usual AI workflow.

silverwind•30m ago
I think AI is a huge boon as it reduces the human bottleneck.

AI is a tool that must be used well, and many people currently raising pull requests seem to think that they don't even need to read the changes, which puts an unnecessary burden on the maintainers.

The first review must be by the user who prompted the AI, and it must be thorough. Only then would I even consider raising a PR towards any open source project.

ramshanker•28m ago
At least for my personal open source project [1], it has been a >5x boost in speed, motivation, operating knowledge level, etc. In some places, I even put an inline comment: "this generated function is not understood completely"! Or maybe a question on specific syntax (C++20).

[1] https://github.com/ramshankerji/Vishwakarma/

OneOffAsk•19m ago
> this generated function is not understood completely

I think this kind of stuff is OK for the most part. I think it's a thrilling part of computer science: building systems so complex they're just on the brink of what can be fully understood by a single person. It's what sets software engineering apart from other engineering fields where it's unacceptable not to fully understand the engineering, say, for factories, buildings, bridges, ships and infrastructure and such.

jandrewrogers•8m ago
It didn’t take AI to destroy Open Source, we were already doing it to ourselves. LLMs just magnified the existing structural issues and made them even easier to exploit. But the trajectory was already clear.
thunderbong•3m ago
Looks to me like the issue is with the PR process, not with open source.

From the article:

> It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
