We all know the feeling. You ask Cursor or Claude to generate a module. It runs. The tests pass. But you look at the code and just... sigh.
It’s "technically correct" but architecturally inept. It fights the language, and introduces unnecessary abstractions, inconsistent patterns, hides causality behind clever one-liners, or subtly fights the "grain" of the framework. Don't get me started about testing and debugging! But don't worry, there's an "AI Lint" for that, too.
I realized that prompting for "clean code" or vague best practices wasn't really working. I got tired of seeing it make the same fuck-ups again and again. I needed to do what I’d do with a bright junior engineer: give them a handbook on Taste.
I wrote a doctrine file -- a set of markdown constraints -- that teaches the agent the difference between code that works and code that belongs.
AI Lint is the productization of that doctrine. It’s not a CLI tool or a SaaS dashboard. It is a set of highly optimized text files you drop into .cursorrules, wire up from AGENTS.md, or paste into your System Prompt.
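Wiring it in is just pointing the agent at the files. A minimal sketch (the paths and filenames below are illustrative, not the actual pack layout):

```
# .cursorrules (or the equivalent entry in AGENTS.md / your system prompt)
# Hypothetical paths -- point these at wherever you keep the doctrine files.
Read ./doctrine/core-philosophy.md and ./doctrine/javascript.md
before writing or editing any code, and follow them strictly.
```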
It enforces rules like:
- "Do not invent state if the platform provides it."
- "Clarity over cleverness: if you can't debug it, don't generate it."
- "Make async boundaries explicit."
The Business Model:
I am selling this as a paid digital product (doctrine packs for Apps & Systems). However, I’ve released a Free Preview on GitHub that includes the Core Philosophy and the "Top 10" commandments for JavaScript, so you can test the impact immediately.
Links:
Website: https://ai-lint.dosaygo.com
Repo (Free Preview): https://github.com/DO-SAY-GO/AI-Lint
I’d love to hear what specific "silent failures" or anti-patterns AI keeps injecting into your stacks. I'm currently expanding the "Rejects" list for the Go and Rust packs, with iOS/Swift/mobile, DB, and infra (Docker, k8s, etc.) packs to follow.