Right now the pipeline is: agent writes code, opens a PR, another agent or human reviews it, CI runs all 4000 tests, someone clicks merge. This is the human workflow copy-pasted onto machines. It doesn't scale.
What if a codebase could maintain itself? Agents fix bugs as they appear, add features from specs, refactor when complexity grows, update dependencies. All verified automatically, shipped continuously. Humans set policies ("anything touching auth needs my approval", "cosmetic changes auto-ship") and only see the exceptions.
The pieces that make this possible: a dependency graph of every function, class, and method in the codebase. Not files, not lines. Entities. If you know what every entity depends on, you can compute blast radius instantly, detect real conflicts (not the false ones git creates), run only the 4 tests that matter instead of 4000, and score confidence per change. High confidence ships automatically. Low confidence routes to a human.
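To make this concrete, here's a minimal sketch of the blast-radius and test-selection idea. All the entity names, the `deps` map, and the test-to-entity mapping are invented for illustration; a real system would extract them from the codebase.

```python
from collections import defaultdict, deque

# Hypothetical entity-level dependency edges: caller -> callees.
deps = {
    "api.login": ["auth.check_password", "db.get_user"],
    "auth.check_password": ["auth.hash"],
    "db.get_user": [],
    "auth.hash": [],
    "ui.render_banner": [],
}

# Hypothetical tests, mapped to the entities they exercise.
tests = {
    "test_login": ["api.login"],
    "test_hashing": ["auth.hash"],
    "test_banner": ["ui.render_banner"],
}

def blast_radius(changed, deps):
    """Every entity that transitively depends on a changed entity."""
    rdeps = defaultdict(set)              # invert edges: callee -> callers
    for caller, callees in deps.items():
        for callee in callees:
            rdeps[callee].add(caller)
    seen, queue = set(changed), deque(changed)
    while queue:                          # BFS over reverse edges
        for caller in rdeps[queue.popleft()]:
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

def impacted_tests(changed, deps, tests):
    """Only the tests whose entry points fall inside the blast radius."""
    radius = blast_radius(changed, deps)
    return {name for name, entries in tests.items()
            if any(e in radius for e in entries)}

print(impacted_tests({"auth.hash"}, deps, tests))
# Changing auth.hash reaches test_hashing directly and test_login via
# auth.check_password -> api.login; test_banner is untouched, so it's skipped.
```

Same principle as running 4 tests instead of 4000: the graph, not a human or a model, decides what a change can possibly break.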
rs545837•1h ago
But self-sustaining codebases aren't using AI to verify AI. The verification layer is deterministic: dependency graphs, targeted test suites, blast radius computation. These are structural checks, not generative ones. The graph doesn't hallucinate. Tests either pass or they don't.
The claw spam problem is what happens when you have no verification at all.
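The policy routing the top comment describes ("anything touching auth needs my approval", "cosmetic changes auto-ship") is the same kind of deterministic check. A minimal sketch, where the `auth.` prefix policy and the entity names are assumptions:

```python
# Assumed policy: anything touching an auth.* entity needs human approval.
AUTH_PREFIXES = ("auth.",)

def route(changed_entities, tests_passed):
    """Deterministic verdict for a change set: no model in the loop."""
    if not tests_passed:
        return "reject"           # targeted tests failed: never ships
    if any(e.startswith(AUTH_PREFIXES) for e in changed_entities):
        return "human-review"     # policy exception: a person sees it
    return "auto-ship"            # structural checks green: ships automatically

print(route({"ui.render_banner"}, tests_passed=True))   # auto-ship
print(route({"auth.hash"}, tests_passed=True))          # human-review
```

The point is that the verdict is a pure function of the change set and the test results, so there's nothing to hallucinate.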