I maintain a few small open source projects and the PR quality went off a cliff once AI coding tools got popular. People generate something that compiles and looks reasonable but is wrong in ways that take me longer to explain than to just fix myself.
The weird part is I use AI heavily for my own code and it's great. But I know the codebase since I wrote it from scratch. Someone who doesn't know it feeding prompts to Copilot produces code that passes lint and still misses the point entirely.
Hashimoto locking contributions to vouched users makes total sense. The old assumption was that effort implied understanding. That's just not true anymore.
vunderba•1h ago
This has been my experience too. AI is like handing someone a motorcycle. Sure, they'll move faster, but without a map they won't necessarily be heading in the right direction.
svstoyanovv•1h ago
Have you tried what Peter Steinberg suggested: "I ask for specifications to be submitted; I can then generate the code based on that spec in a minute"? We find exactly this useful: if a contributor wants to submit a PR, they also submit a specification based on a template we provide. We have automated pipelines in GitHub Actions that verify the quality of that specification. That way you can see the reasoning, the deeper thinking, the insights, the use cases the contributor was trying to solve, and the environment in which the work was done.
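A minimal sketch of what such a CI check could look like: a small script that a GitHub Actions job runs against a contributor-submitted markdown spec, failing the build if required sections are missing. The section names and file layout here are assumptions for illustration, not the commenter's actual template.

```python
# Hypothetical spec checker a CI job could invoke as:
#   python check_spec.py path/to/spec.md
# Section names below are illustrative assumptions, not a real template.
import re
import sys

REQUIRED_SECTIONS = [
    "Problem",      # what the contributor is trying to solve
    "Use Cases",    # concrete scenarios the change addresses
    "Approach",     # intended design, stated before any code
    "Environment",  # where and how the change was tested
]

def check_spec(text: str) -> list[str]:
    """Return the required sections missing from a markdown spec."""
    headings = {
        m.group(1).strip()
        for m in re.finditer(r"^#+\s+(.*)$", text, re.MULTILINE)
    }
    return [s for s in REQUIRED_SECTIONS if s not in headings]

if __name__ == "__main__" and len(sys.argv) > 1:
    missing = check_spec(open(sys.argv[1]).read())
    if missing:
        print("Spec is missing sections:", ", ".join(missing))
        sys.exit(1)  # non-zero exit fails the Actions job
    print("Spec looks complete.")
```

A deeper pipeline could also lint for minimum section length or run the spec past an LLM reviewer, but even a structural check like this forces contributors to state their reasoning before code is generated.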