I built *CodeDrift*, a CLI tool that detects bugs commonly introduced by AI coding assistants like Copilot, Cursor and ChatGPT.
Over the last year I noticed that AI tools often generate code that compiles, passes linting, and looks reasonable in code review, yet still contains subtle issues.
Some common examples I kept seeing:
* async `forEach` loops that never await promises
* missing authorization checks (IDOR)
* hallucinated dependencies that don't exist
* stack traces leaking sensitive information
* request data used without validation
These bugs often slip past ESLint, TypeScript and even human reviewers because the code looks correct.
CodeDrift parses the code using the TypeScript compiler API and runs a set of detectors looking for these patterns.
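The real detectors walk the TypeScript AST, but the core idea behind a detector can be sketched with a much cruder textual check (a simplified stand-in, not how CodeDrift works internally):

```
// Greatly simplified stand-in for an AST-based detector:
// flag `.forEach(` calls whose callback is declared `async`,
// since forEach never awaits the promises the callback returns.
function detectAsyncForEach(source) {
  const findings = [];
  const pattern = /\.forEach\(\s*async\b/g;
  let match;
  while ((match = pattern.exec(source)) !== null) {
    // 1-based line number of the match
    const line = source.slice(0, match.index).split('\n').length;
    findings.push({ line, rule: 'async-foreach-no-await', severity: 'CRITICAL' });
  }
  return findings;
}
```

An AST walk avoids the false positives a regex would hit (strings, comments), which is why the real tool leans on the compiler API instead.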
Example:
```
async function syncProducts(items) {
  items.forEach(async (item) => {
    await updateStock(item.id);
  });
}
```
CodeDrift output:
```
CRITICAL: async forEach does not await promises
Fix: use Promise.all or a for...of loop
```
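The two suggested fixes look like this; `updateStock` is stubbed here purely for illustration:

```
// Hypothetical stub standing in for the real update call
async function updateStock(id) {
  return id * 2;
}

// Fix 1: for...of runs the updates sequentially, awaiting each one
async function syncProductsSequential(items) {
  const results = [];
  for (const item of items) {
    results.push(await updateStock(item.id));
  }
  return results;
}

// Fix 2: Promise.all runs the updates concurrently and awaits all of them
async function syncProductsConcurrent(items) {
  return Promise.all(items.map((item) => updateStock(item.id)));
}
```

Sequential is the safer default when updates must not interleave; `Promise.all` is faster but fails fast on the first rejection.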
Another example it detects:
```
Database query using user-supplied ID without authorization check
→ potential IDOR vulnerability
```
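To make the IDOR case concrete, here's a minimal hypothetical handler showing the ownership check that AI-generated code tends to omit (the `db` and `session` shapes are invented for the example, not part of CodeDrift):

```
// Hypothetical handler sketch. Without the ownership check below,
// any authenticated user could fetch any order by guessing IDs (IDOR).
function getOrder(db, session, orderId) {
  const order = db.orders.find((o) => o.id === orderId);
  if (!order) return { status: 404 };
  // Authorization check: the order must belong to the requesting user
  if (order.userId !== session.userId) return { status: 403 };
  return { status: 200, order };
}
```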
The goal isn’t to replace tools like ESLint or TypeScript, or security scanners like Snyk. It’s meant to act as a safety layer for code generated with AI assistants.
The tool runs locally, requires no cloud access, and can be tried with:
```
npx codedrift
```
I’d love feedback from developers who are using AI coding tools in production.
tayyab1122•1h ago
The hallucinated dependency detector is underrated — AI tools confidently import packages that don't exist and it's embarrassing how easily that slips through. Good to have something that catches it automatically.
Adding this to our dev toolchain.