Most developers don't really review AI-generated code.
In many cases, it "looks fine" at first glance, but contains issues like:

- Hardcoded secrets (API keys, tokens)
- Unsafe patterns (eval/exec, insecure deserialization)
- Prompt injection hidden in comments or instructions
So I built something to test this idea.
It's a lightweight proxy that sits between the IDE and the LLM and analyzes output in real time, before it reaches the developer.
The goal isn't to replace code review, but to catch risky patterns at the moment they're generated.
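To make the idea concrete, here's a minimal sketch of the kind of scan such a proxy could run on generated code before passing it along. The function name, the rule set, and the regexes are all illustrative assumptions, not the actual implementation:

```python
import re

# Illustrative rules only -- a real scanner would need far more robust
# detection (entropy checks for secrets, AST analysis for eval/exec, etc.)
RISKY_PATTERNS = {
    "hardcoded_secret": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "unsafe_eval": re.compile(r"\b(eval|exec)\s*\("),
    "insecure_pickle": re.compile(r"\bpickle\.loads?\s*\("),
}

def scan_generated_code(code: str) -> list[tuple[str, int]]:
    """Return (issue, line_number) pairs for each risky pattern found."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for issue, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((issue, lineno))
    return findings

sample = 'API_KEY = "sk_live_abcdefghij1234567890"\nresult = eval(user_input)\n'
print(scan_generated_code(sample))
# → [('hardcoded_secret', 1), ('unsafe_eval', 2)]
```

Regexes alone won't catch everything (especially prompt injection hidden in comments, which needs more context-aware analysis), but even a cheap line-level pass like this flags the obvious cases at generation time.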
Curious if others are seeing the same issue with AI-assisted development.
(If anyone's curious, I put together a VS Code extension for this.)
skypark0407•1h ago
The code “looks fine”, compiles, even passes basic tests.
That’s what makes this tricky — the risk is subtle, not obvious.