GEO (Generative Engine Optimization) is the idea that AI search engines (ChatGPT, Perplexity, Claude) cite content differently than Google ranks it. Things like question-formatted headings, FAQ sections, entity density, E-E-A-T signals, and citation-ready statistics all matter for whether an LLM will pull from your content. geo-lint has 35 rules specifically for this.
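To make the rule idea concrete, here is a minimal sketch of what one deterministic GEO rule could look like. This is a hypothetical illustration, not geo-lint's actual code: the rule name, the `Violation` shape, and the heading heuristic are all assumptions.

```typescript
// Hypothetical violation shape (geo-lint's real output format may differ).
interface Violation {
  line: number;
  rule: string;
  message: string;
  suggestion: string;
}

// Example rule sketch: flag level-2 markdown headings that are not
// phrased as questions. Purely deterministic string checks, no LLM.
function checkQuestionHeadings(markdown: string): Violation[] {
  const violations: Violation[] = [];
  markdown.split("\n").forEach((text, i) => {
    const match = text.match(/^##\s+(.*)$/);
    if (match && !match[1].trim().endsWith("?")) {
      violations.push({
        line: i + 1,
        rule: "question-heading",
        message: `Heading "${match[1].trim()}" is not question-formatted`,
        suggestion: "Rephrase as a question, e.g. start with What/How/Why",
      });
    }
  });
  return violations;
}
```

Because each check is just string inspection like this, the same input always yields the same violation list.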
The interesting part is the lint loop. It ships as a Claude Code skill — you run /geo-lint audit and it spawns parallel subagents, one per file. Each agent reads the violations, edits the content, re-lints, and repeats until clean (max 5 passes). The linter is fully deterministic (no LLM in the rules themselves), so the agent gets unambiguous violation + suggestion pairs to act on. Zero hallucination risk in the analysis layer.
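The shape of that loop can be sketched as follows. This is a hedged illustration of the control flow described above, not geo-lint's implementation: the function names and signatures are assumptions, and the `fix` step stands in for the agent edit pass.

```typescript
// Assumed violation shape for illustration purposes.
type LintFn = (content: string) => { rule: string; suggestion: string }[];
type FixFn = (content: string, violations: { rule: string; suggestion: string }[]) => string;

// Sketch of the lint loop: lint, fix, re-lint, until clean or the
// pass budget (5, per the post) is exhausted. In the real skill the
// fix step is a Claude Code subagent editing the file.
function lintLoop(
  content: string,
  lint: LintFn,
  fix: FixFn,
  maxPasses = 5
): { content: string; passes: number; clean: boolean } {
  let passes = 0;
  let violations = lint(content);
  while (violations.length > 0 && passes < maxPasses) {
    content = fix(content, violations); // non-deterministic in practice
    violations = lint(content);         // deterministic re-check
    passes++;
  }
  return { content, passes, clean: violations.length === 0 };
}
```

The key property is that the loop's exit condition depends only on the deterministic `lint` output, so a flaky fix pass can never falsely report a clean file.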
It also works without Claude Code — npx geo-lint --format=json gives you a flat JSON array any agent (Cursor, Copilot, Windsurf) can consume. The rules are the same either way.
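A consumer of that JSON array might look something like this. The field names below are assumptions about the output schema (the post only says it is a flat JSON array); the grouping step mirrors the one-subagent-per-file dispatch described above.

```typescript
// Assumed per-violation fields; geo-lint's actual schema may differ.
interface GeoViolation {
  file: string;
  line: number;
  rule: string;
  suggestion: string;
}

// Group the flat array by file so an agent harness can hand each
// file's violations to a separate worker.
function groupByFile(violations: GeoViolation[]): Map<string, GeoViolation[]> {
  const byFile = new Map<string, GeoViolation[]>();
  for (const v of violations) {
    const bucket = byFile.get(v.file) ?? [];
    bucket.push(v);
    byFile.set(v.file, bucket);
  }
  return byFile;
}
```

In practice you would feed `groupByFile` the parsed output of `npx geo-lint --format=json`.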
MIT licensed, with a single runtime dependency (gray-matter). npm: @ijonis/geo-lint
ijonis•8h ago
Turns out AI engines look for different signals — question-formatted headings, entity density, FAQ sections, citation-ready statistics. Nobody had a linter for this, so I built one.
The lint loop is the part I'm most proud of: the linter outputs deterministic JSON (no LLM involved), then Claude Code agents consume those violations and fix them autonomously. One subagent per file, in parallel, max 5 passes. The separation matters — deterministic analysis, non-deterministic fixes.
Happy to answer questions about GEO rules, the Claude Code skill architecture, or the lint loop design.