I built a collection of 100+ specialized subagents for Claude Code that act like on-call experts (Python, React, Postgres, Docker, Stripe, Kafka, Prometheus, etc.). They can be auto-invoked by context or explicitly called (“use the postgres-expert to design indexes”), and each carries a focused system prompt + quality checklist. The runner picks a Claude model per task to balance speed/cost.
- Covers languages, frameworks, DBs/ORMs, DevOps, testing, ML, observability, auth, payments, messaging, and more.
- Zero server: they’re just files in ~/.claude/agents/ (see the example after this list).
- MIT-licensed and contributions welcome.
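For reference, each agent is a Markdown file with YAML frontmatter (name, description, optional tools/model) that Claude Code picks up from that directory. A minimal sketch — the postgres-expert body below is illustrative, not a verbatim agent from the repo:

```markdown
---
name: postgres-expert
description: Index design, query tuning, and schema review for PostgreSQL. Invoke for slow queries and migration planning.
model: sonnet
---
You are a PostgreSQL expert. When asked to design indexes:
1. Ask for the table DDL and the slowest queries (EXPLAIN ANALYZE output if available).
2. Propose indexes, justify each against the expected access patterns, and flag write-amplification costs.
3. Close with a quality checklist: partial/covering indexes considered, index bloat, and lock behavior of CREATE INDEX CONCURRENTLY.
```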
I’d love feedback on missing domains, rough edges in prompts, and real-world cases where this helped (or failed). Thanks!
mutant•5mo ago
```yaml
# Effective steering
stack: "FastAPI + SQLAlchemy + Redis"
scale: "10k RPS, sub-50ms P99"
deployment: "K8s, multi-region"
constraints: ["async-first", "12-factor", "observability"]

# Not this
python-expert: "You are an expert in advanced Python..."

# This
context: "Building FastAPI backend, PostgreSQL, Redis cache, Docker deployment"
constraints: "Sub-100ms response times, 10k concurrent users"
preferences: "Async-first, type hints, structured logging"
```
I stopped telling AI how to do its job a long time ago and switched to context management — I get dramatically better results. The only time I need to bash training in is when the model doesn't know an API. Then I spawn a research agent to write an updated training prompt for that API or command, and import it as needed. That keeps the primary context window cleaner for longer.
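One way that pattern can look (my assumption about the workflow — the file name and API details are hypothetical): the research agent writes a notes file, and you pull it in per task with an @-mention instead of keeping it resident in CLAUDE.md:

```markdown
<!-- docs/payments-api-notes.md — hypothetical file written by the research agent -->
# Payments API (post-training-cutoff notes)
- Auth: bearer token via POST /v2/token; tokens expire after 15 minutes.
- Idempotency: send an Idempotency-Key header on every POST.
- Pagination changed in v2: cursor-based (next_cursor), not offset-based.
```

Then reference @docs/payments-api-notes.md in the prompt only for tasks that touch that API, so the notes never sit in context otherwise.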