Tools like Cursor and Claude have made it very easy to ship code quickly, which is great. But I've also started seeing PRs where the author can't really explain what's going on in their own changes. Not always, but often enough that it made me concerned.
I'm not against AI here; I use these tools constantly myself. But it got me thinking: how do we actually know the human understands what they're shipping?
Ninchi is an experiment around that idea. Before a PR is merged, it asks the author a question about their own changes. It's not meant to be perfect enforcement, just a bit of friction and a signal.
The core is open source here: https://github.com/jbethune777/ninchi
Curious whether others are seeing the same thing, or if I'm over-indexing on a niche problem. I'm also working on a SaaS version at ninchi.ai; it's in alpha and still rough around the edges, but feel free to check it out.