Superego is a Rust CLI that hooks into Claude Code's lifecycle. Before large edits, and again before Claude finishes, a second LLM evaluates the work against one question: is this the simplest thing that actually solves the stated problem?
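Concretely, Claude Code hooks are configured under a `hooks` key in its settings file. A wiring like the one described could look roughly like this; the `sg hook` subcommand is illustrative, not the actual interface (`sg init` presumably writes the real configuration):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "sg hook" }]
      }
    ],
    "Stop": [
      {
        "hooks": [{ "type": "command", "command": "sg hook" }]
      }
    ]
  }
}
```

`PreToolUse` covers the "before large edits" case and `Stop` covers "before Claude finishes."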
When something's off, Claude is blocked and shown feedback like:

- "This may be a local maximum. What alternatives were considered?"
- "Is this necessary right now? This seems to solve [hypothetical problem] rather than [actual need]."
Claude then tries to self-correct, often proposing something simpler or asking the clarifying questions it should have asked upfront.
The philosophy: Every line of code is a liability. Every abstraction is a loan. If you can't explain it simply, it's too complex. If it feels clever, be suspicious.
Tradeoffs: Yes, this adds LLM calls. In my experience the tighter results are worth it, but YMMV.
Install:

    cargo install superego   # or: brew install cloud-atlas-ai/superego/superego
    sg init && claude
The prompt is customizable. The default is opinionated toward simplicity, because the alternative is an AI that confidently builds cathedrals when you asked for a shed.

Source-available (free for personal/research use). GitHub: https://github.com/cloud-atlas-ai/superego