There's a fragmented AI development problem:

- Five developers build similar features, and all of them start from scratch
- No way to share "here's how we prompt AI to follow our architecture"
- No trail of how the AI changes were generated
- When people leave the company, their knowledge leaves with them
Aviator is a developer productivity platform with tools like MergeQueue, Stacked PRs, and Releases (https://docs.aviator.co/). Runbooks came from watching our own team and our customers struggle with fragmented AI adoption.
With Runbooks:

1. Create executable specs - a plan (with AI) that captures intent, constraints, and steps before AI touches code.
2. Version control everything - specs, AI conversations, and generated changes are all versioned. Fork, improve, roll back.
3. Make it multiplayer - multiple engineers collaborate in the same AI coding session.
4. Build a template library - migrate one test from Enzyme to React Testing Library, then use that Runbook to batch-migrate the entire test suite.
We're not replacing Claude Code or Cursor. Runbooks is powered by Claude Code. We're just making it work at team scale.
--
Explore prebuilt templates in our open-source library: https://github.com/aviator-co/runbooks-library
Templates cover migrations, refactoring, and modernization. They're code-agnostic starting points that generate Runbooks using your code context.
Docs and quickstart: https://docs.aviator.co/runbooks
--
About the name: Yes, we know "runbooks" makes you think incident management. But technically a runbook is just a documented, step-by-step procedure—which is exactly what these are for AI agents. We're keeping it!
Happy to take feedback and answer questions about architecture, context management, and sandboxes.