It passes tests. It runs. But it fights the grain of the language. It invents state when the platform provides it. It hides causality behind clever one-liners. It creates three different solutions to the same problem in the same file. The architecture is technically valid but cognitively expensive.
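To make "invents state when the platform provides it" concrete, here's a hypothetical before/after. This is my own illustration, not an excerpt from the packs; the `ws` package, the URL, and the function names are placeholders.

```js
// Hypothetical sketch using the `ws` package; names and URL are illustrative.
const WebSocket = require('ws');
const socket = new WebSocket('wss://example.com');

// Before: the agent invents shadow state the platform already tracks.
let isOpen = false;
socket.on('open', () => { isOpen = true; });
socket.on('close', () => { isOpen = false; });
const sendShadow = (msg) => { if (isOpen) socket.send(msg); };

// After: read the platform's own state; nothing to drift out of sync.
const send = (msg) => {
  if (socket.readyState === WebSocket.OPEN) socket.send(msg);
};
```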
Prompting for "clean code" didn't help. The agents needed what I'd give a talented junior: a handbook on taste.
So I wrote doctrine files—markdown constraints that teach agents the difference between code that compiles and code that's maintainable. Things like:
- "More than 20 mutable state variables in a file? You have multiple modules pretending to be one."
- "Three approaches to the same problem coexisting? Pick one, delete the others."
- "If you can't explain the condition in one sentence, extract it to a named boolean."
AI Lint is the productization of this. It's not a CLI or SaaS—just optimized text files you drop into .cursorrules, AGENTS.md, or your system prompt. The agents read them and actually follow them.
There's doctrine (what belongs) and rejects (what to refuse). When rules conflict, there's an override protocol. It's designed for context injection, not human reading.
Business model: paid packs for different stacks (Apps, Systems, etc.). But I've released a free preview on GitHub with the core philosophy and the JavaScript/Node.js doctrine so you can test the impact.
- Website: https://ai-lint.dosaygo.com
- Free Preview: https://github.com/DO-SAY-GO/AI-Lint
I'm curious what anti-patterns AI keeps injecting into your codebases. I'm expanding the Go and Rust rejects right now, and planning iOS/Swift and infra (Docker, k8s) packs next.