Edit: it is genuinely wild, I don't know of another product category that selects so perfectly for the WORST type of person to be its enthusiast. Just every single person I see hyped about AI is fucking insufferable on at least one, and usually multiple, axes.
AI is the fucking problem. Yes, it has (some) uses, but nowhere near the number advertised. And more and more, the median use case seems to be, again, overloading people actually trying to do work with an avalanche of bullshit.
The solution is exactly what the linked article says: shut it down. The AI people have ruined another good thing that was beneficial both to the project and to a number of individuals.
At this point that's impossible, so I concur with the parent: forget about shutting it down and think of something actually realistic.
Hell, go Google "AI maintainer abuse" and Google's fucking own AI will tell you why it sucks and how it's creating similar issues all over the damn place, with similar results: OSS projects having to close the gates some amount and/or deal with a deluge of horseshit submissions.
Why is it everyone else's job to do your thinking?
Like I don't like posting this angry but I am so fucking sick of this. Over and over: the drawbacks are shown, people say "yeah we called that" and somebody comes up to be like "well how do we fix it," "your tone is bad," "we have to get proper solutions," as though this stuff hasn't been discussed in great, agonizing detail since mid-2023. If you genuinely don't know, it's because you didn't WANT to know. And you probably still don't.
We all had that one "productive" engineer on our teams who would write huge PRs full of large swaths of refactoring, warranted or not, way before anyone could even imagine in their wildest dreams that neural networks could generate such huge amounts of code.
The net effect of such a "productive" engineer was never an increase in team velocity. Instead, the team would slow to a crawl: either his PRs had to be reviewed in detail, eating up everyone's time, or a cursory LGTM let them blow up in production, forcing everyone back to the drawing board. Meanwhile, the project architecture would have shifted so rapidly due to his "productivity" that no one had a clear picture of the codebase, of what's where, except that one "super smart talented productive loyal to the company goals" guy.
“Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado. The tactical tornado is a prolific programmer who pumps out code far faster than others but works in a totally tactical fashion. When it comes to implementing a quick feature, nobody gets it done faster than the tactical tornado. In some organizations, management treats tactical tornadoes as heroes. However, tactical tornadoes leave behind a wake of destruction. They are rarely considered heroes by the engineers who must work with their code in the future. Typically, other engineers must clean up the messes left behind by the tactical tornado, which makes it appear that those engineers (who are the real heroes) are making slower progress than the tactical tornado.” - John Ousterhout, A Philosophy of Software Design
A better example would be if you’d changed the behavior of the library as you did this work, and the library changes introduced hard-to-detect bugs across the application.
If you don't ever have a massive PR from a dynamite session, then you cannot ever be better than "average and plodding". So the question is, what's the context of the massive PR and how should it be handled?
* Mature product making money, intermediate engineer just refactored everything so it's "better"? Shut the fuck up, kindly please, you will have to demonstrate that you understand why things are this way and why it's better before we even have this conversation.
* Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.
Obviously there are many other contexts and these are 2 extremes in a multi-dimensional space. But if the process is "we litigate every line", then that's just not an innovative place to be. Yes, most PRs should be small, targeted, easy to review and tied to a ticket but if you're innovating? By definition it's a little different.
Even with AI, just tell it to make smaller, self-contained PRs. I do this with Claude or GPT models and they do just fine.
Do you want one big PR or 100 small ones? You can't escape the sheer volume of code it's going to produce.
So all we have to do is write code without reading or understanding it! Larry Wall was right all along!
Anyone trying to suggest that AI hasn't sped up quality code production is just insisting on keeping their head in the sand, IMO.
It can't be on individual maintainers to stop this; IMO it's on GitHub (and GitLab) to stop these sorts of accounts from even getting to the point of submitting PRs. It's essentially spam.
Look at the user who created the first PR they reference (https://github.com/Samuelsills). This is not an account that should be allowed to do anything close to opening a PR against a well known repo.
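To make the idea concrete, here is a minimal sketch of the kind of heuristic a forge could apply before letting an account open a PR. The field names (`created_at`, `followers`, `public_repos`) mirror what GitHub's Users API actually returns, but the function, thresholds, and the idea of gating on them are purely illustrative assumptions, not anything GitHub implements:

```python
from datetime import datetime, timedelta, timezone

def looks_like_throwaway(account, now=None, min_age_days=30, min_activity=3):
    """Flag accounts that are brand new AND have essentially no prior
    activity. Thresholds are illustrative, not tuned against real data."""
    now = now or datetime.now(timezone.utc)
    age = now - account["created_at"]
    activity = account["followers"] + account["public_repos"]
    return age < timedelta(days=min_age_days) and activity < min_activity

# A two-day-old account with no history gets flagged:
fresh = {"created_at": datetime.now(timezone.utc) - timedelta(days=2),
         "followers": 0, "public_repos": 1}
print(looks_like_throwaway(fresh))  # True
```

A real deployment would obviously need more signals (email verification age, prior merged PRs, report-to-merge ratio), since any single threshold is trivial to farm past, but even a crude gate like this would raise the cost of drive-by slop.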
I'm not trying to suggest they _need_ to implement it. Like I said, closing it is reasonable. Completely aside from any other considerations, one could just decide that they don't feel like dealing with it. But there are other options.
I'd say closing a program which doesn't work anymore is a better idea.
If you can think of problems that aren't solved by one of those two mechanisms, I'd be interested in hearing them enumerated.
If they have to pay for reviewer time for each of 1000 reports, then the scheme stops being viable.
It's even possible to link this directly to maintainers/employees: if you can review 10 such AI/real submissions per hour (likely more if it's AI slop that's easy to detect), you're generating another revenue stream. Now, I have no idea whether these guys are based in the SF Bay Area or in a low-cost-of-living country, but as an "add-on", $100 an hour isn't too shabby (and that can be on the "low end" if one's good at spotting AI crap).
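As a sanity check on those numbers, a back-of-envelope sketch (all figures are the illustrative ones from the comment, not real data):

```python
# If a reviewer billing $100/hour can triage 10 submissions/hour,
# each report costs $10 of reviewer time, so a $10-per-report
# submission fee roughly makes triage self-funding.
hourly_rate = 100           # reviewer's rate, $/hour (illustrative)
reports_per_hour = 10       # throughput; obvious slop may go faster
cost_per_report = hourly_rate / reports_per_hour

submission_fee = 10         # fee charged per report
print(cost_per_report)                     # 10.0
print(submission_fee >= cost_per_report)   # True
```

The interesting property is that the break-even fee scales down as slop gets easier to spot: double the triage throughput and a $5 fee still covers costs while deterring bulk submitters.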
Side note: isn't there some way to verify whether the "vulns" are actual vulns or not? ...Heck, why not throw an LLM at it, funded by a single $10 submission fee?
> It is possible to set up automated systems to gatekeep this, but with a non-negligible dollar value attached to it, the incentive is just too great for the AIs to just keep arguing, reopening the same PR, etc.
I was thinking of using it for my full stack Rust apps just so everything works with cargo and I don't have to bring in SQLite separately.
https://github.com/UnsafeLabs/Bounty-Hunters
The corresponding leaderboard:
*Edit - I get it. It seems like the authentication is a challenge.
New identities are cheap.
Denominated in BTC to avoid chargebacks etc.