Not sure if this is worth sharing, but I've been using AI coding assistants heavily for the past few months and kept running into the same frustrating pattern.
I'd have these amazing flow sessions with Claude or other AI tools where we'd build something that felt brilliant. The code looked clean, the architecture seemed solid, and I'd go to bed feeling productive.
Then I'd wake up and actually try to use what we built. Half the functions were just sophisticated-looking stubs. Error handling that caught exceptions just to ignore them. TODOs that were more like "TODO: figure out how this should actually work."
The worst part wasn't that the AI was wrong - it was that the AI was convincingly wrong. In the moment, everything felt right because the code looked professional and the comments were confident.
So I started building this tool called "sniff" (yeah, like sniffing out BS) to catch these patterns in real time. It looks for things like (contrived examples in the snippet after this list):
* Functions that claim to do X but actually just return a default value
* Error handling that's all ceremony and no substance
* Comments that overpromise what the code delivers
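To make those concrete, here's an invented Python snippet (names and logic made up for this post, not pulled from any real session) that would trip all three checks:

```python
import logging

logger = logging.getLogger(__name__)

def calculate_risk_score(portfolio: dict) -> float:
    """Compute a weighted risk score across asset classes,
    accounting for volatility, correlation, and liquidity."""
    # Pattern 1: the docstring promises a model; the body
    # just returns a default value.
    return 0.0

def sync_to_remote(records: list) -> bool:
    """Synchronize records with the remote store, with retries."""
    try:
        pass  # TODO: figure out how this should actually work
    except Exception:
        # Pattern 2: error handling that's all ceremony and no
        # substance - the exception is caught and ignored.
        logger.debug("sync failed, continuing anyway")
    # Pattern 3: the docstring overpromises; we report success
    # regardless of what happened.
    return True
```

Each of these parses fine, reads plausibly, and does nothing useful, which is exactly why it slips past a tired reviewer at 1am.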
The weird part was using AI to help build the tool that catches AI mistakes. Meta-level stuff where sniff would analyze its own improvements and flag them as suspicious. "Your new feature detection is just an untested regex" - thanks, tool I just wrote.
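For flavor, here's a toy sketch of what a first pass at this kind of detector could look like. To be clear, this is not sniff's actual implementation - the file name, rules, and regexes below are invented for this post - it's just the sort of "untested regex" heuristic the tool roasted me for:

```python
# toy_sniff.py - a deliberately naive pattern scanner, not the real thing.
import re
import sys

# Toy heuristics only, invented for illustration.
LINE_SMELLS = {
    "placeholder TODO": re.compile(r"TODO.*(figure out|somehow|later)", re.IGNORECASE),
    "stub return": re.compile(r"^\s*return\s+(None|0|0\.0|True|False|\[\]|\{\})\s*$"),
}
# Multi-line smell: an except clause whose entire body is `pass`.
SWALLOWED = re.compile(r"except[^\n]*:\s*\n\s*pass\b")

def sniff_file(path: str) -> int:
    """Print suspicious lines in one file; return the number of findings."""
    text = open(path).read()
    findings = 0
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in LINE_SMELLS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {name}: {line.strip()}")
                findings += 1
    for match in SWALLOWED.finditer(text):
        lineno = text.count("\n", 0, match.start()) + 1
        print(f"{path}:{lineno}: possible swallowed exception")
        findings += 1
    return findings

if __name__ == "__main__":
    total = sum(sniff_file(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)  # nonzero exit so it can gate a commit hook
```

Run it as `python toy_sniff.py mymodule.py`; it prints `file:line: possible <smell>` findings and exits nonzero if anything smells off. The real tool does a lot more than grep, but the spirit is the same: cheap, fast checks that fire while you're still in the flow session, not the morning after.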
I've been using it for months now and it's honestly changed how I work with AI assistants: I still get the creative benefits, but with a reality check built in.
Anyway, I open-sourced it in case anyone else has dealt with this. Maybe it's just me overthinking things, but figured I'd share: https://github.com/conikeec/sniff
There's also a longer write-up on how I built it: https://conikeec.substack.com/p/how-i-built-an-vibe-coding-m...
Not trying to solve world hunger here, just scratching my own itch. Let me know if you've had similar experiences with AI coding tools - curious if this resonates with others or if I'm just paranoid about my own code.