Shannon solves this:
- Create customized agents with different models (Opus/Sonnet/Haiku) and system prompts
- Build team workflows with a drag-and-drop DAG editor (parallel, sequential, or fully custom)
- Describe your goal in natural language → AI analyzes your codebase and proposes a task plan with dependencies
- Watch everything in real-time: task graph, agent chat, code diffs
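To make the "task plan with dependencies" idea concrete, here's a minimal sketch of wave-based DAG scheduling in Go: tasks whose dependencies are already done run together as a parallel wave, and a cycle is reported as an error. The `Task` type and `schedule` function are illustrative names, not Shannon's actual data model.

```go
package main

import "fmt"

// Task is a hypothetical task-plan node (names are illustrative).
type Task struct {
	ID   string
	Deps []string
}

// schedule groups tasks into waves: every task in a wave has all of its
// dependencies satisfied by earlier waves, so a wave can run in parallel.
func schedule(tasks []Task) ([][]string, error) {
	remaining := make(map[string][]string, len(tasks))
	for _, t := range tasks {
		remaining[t.ID] = append([]string{}, t.Deps...)
	}
	done := make(map[string]bool)
	var waves [][]string
	for len(remaining) > 0 {
		var wave []string
		for id, deps := range remaining {
			ready := true
			for _, d := range deps {
				if !done[d] {
					ready = false
					break
				}
			}
			if ready {
				wave = append(wave, id)
			}
		}
		// No task became ready, so the remaining tasks depend on each other.
		if len(wave) == 0 {
			return nil, fmt.Errorf("dependency cycle among remaining tasks")
		}
		for _, id := range wave {
			done[id] = true
			delete(remaining, id)
		}
		waves = append(waves, wave)
	}
	return waves, nil
}

func main() {
	waves, err := schedule([]Task{
		{ID: "implement", Deps: []string{"design"}},
		{ID: "design"},
		{ID: "docs", Deps: []string{"design"}},
		{ID: "write-tests", Deps: []string{"implement"}},
	})
	if err != nil {
		panic(err)
	}
	// "design" runs alone, then "implement" and "docs" in parallel,
	// then "write-tests".
	fmt.Println(waves)
}
```

Order within a wave is not guaranteed (Go map iteration is randomized), which is fine here since wave members are independent by construction.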
There's also a Monaco-based prompt editor with semantic syntax highlighting for XML tags (the kind Claude responds well to), autocomplete, and an "AI Improve" button that rewrites your system prompt in one click.
Tech: Go backend, React frontend, Wails v2 for the desktop shell, SQLite for storage. It shells out to the Claude Code CLI under the hood rather than calling the API directly, so you get all of Claude Code's built-in tools (file editing, bash, etc.).
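A sketch of what "shells out to the Claude Code CLI" looks like from Go. The `-p` flag is Claude Code's non-interactive print mode; the exact flags Shannon passes and the `buildClaudeCmd` helper are assumptions for illustration, not Shannon's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildClaudeCmd assembles a non-interactive `claude -p <prompt>` invocation,
// run from inside a per-task workspace directory. Hypothetical helper; the
// real invocation Shannon uses may differ.
func buildClaudeCmd(prompt, workspace string) *exec.Cmd {
	cmd := exec.Command("claude", "-p", prompt)
	cmd.Dir = workspace // the CLI edits files relative to this workspace copy
	return cmd
}

func main() {
	cmd := buildClaudeCmd("Summarize TODOs in this repo", "/tmp/workspace")
	fmt.Println(cmd.Args)
	// To actually run it: out, err := cmd.CombinedOutput()
}
```

Running via the CLI means authentication, tool permissions, and file editing are all handled by Claude Code itself; the wrapper only manages workspaces and collects output.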
Named after Claude Shannon, as you might guess.
Limitations: requires the Claude Code CLI to be installed and authenticated. Local-only desktop app. Hobby project, so expect rough edges. Workspace copies can eat disk space on large repos.
Linux and Windows builds available. MIT licensed.
thesameerpanda•45m ago
The execution context matters more than the prompt itself. Been thinking about this from the opposite angle - not just capturing workflows, but making them actually repeatable when you're the only person doing everything. The judgment calls are the hardest part to systematize.
How are you handling versioning when a workflow evolves based on what actually worked versus what you planned?