I'm a pentester, and the recent wave of security issues with AI agent frameworks (exposed API keys, RCE vulnerabilities, malicious marketplace plugins) made me uncomfortable enough to build something different.
Hydra runs every AI agent inside its own container. Agents start with nothing, and only see what you explicitly declare (mounts, secrets, etc.). Mounts and secrets require agreement between two independent config files (the agent config and a separate host-level allowlist), so even if an agent's config gets tampered with, it can't escalate its own access.
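To make the two-file agreement rule concrete, here is an added example (my illustration, not Hydra's own code) of how a launcher can compute the effective grants as the intersection of an agent config and a host-level allowlist. Both config documents are constructed inline, and the field names, the `ro`/`rw` modes and the `docker run`-style flag shape are assumptions, not Hydra's format. The point it shows is that a mount or secret added only to the agent side is denied and reported, never silently granted or quietly downgraded.

```python
"""Added example: two-file agreement check for container mounts and secrets.
Not Hydra's code; config shapes below are assumptions."""
from dataclasses import dataclass, field

# Constructed sample: what the agent's own (agent-editable) config asks for.
AGENT_CONFIG = {
    "name": "telegram-bot",
    "mounts": [
        {"host": "/srv/bot/data", "container": "/data", "mode": "rw"},
        # Present only on the agent side, e.g. after tampering: must be denied.
        {"host": "/home/user/.ssh", "container": "/keys", "mode": "ro"},
        # Allowlisted, but only read-only on the host side: rw must be denied.
        {"host": "/srv/bot/config", "container": "/config", "mode": "rw"},
    ],
    "secrets": ["TELEGRAM_TOKEN", "ANTHROPIC_API_KEY", "AWS_SECRET_KEY"],
}

# Constructed sample: the host-level allowlist, owned by the operator and
# not writable from inside any agent container.
HOST_ALLOWLIST = {
    "telegram-bot": {
        "mounts": {"/srv/bot/data": "rw", "/srv/bot/config": "ro"},
        "secrets": {"TELEGRAM_TOKEN", "ANTHROPIC_API_KEY"},
    }
}

# Higher rank = broader access; a request may never exceed the host's rank.
MODE_RANK = {"ro": 0, "rw": 1}


@dataclass
class Grants:
    mounts: list = field(default_factory=list)   # (host_path, container_path, mode)
    secrets: list = field(default_factory=list)  # secret names
    denied: list = field(default_factory=list)   # human-readable reasons


def resolve_grants(agent_cfg, host_allowlist):
    """Return only what BOTH files agree on.

    Anything the agent requests that the host allowlist does not explicitly
    permit (unknown path, broader mode, unknown secret) lands in `denied`.
    An agent with no host entry at all gets nothing.
    """
    allowed = host_allowlist.get(agent_cfg["name"], {"mounts": {}, "secrets": set()})
    g = Grants()

    for m in agent_cfg.get("mounts", []):
        allowed_mode = allowed["mounts"].get(m["host"])
        if allowed_mode is None:
            g.denied.append(f"mount {m['host']}: not in host allowlist")
        elif MODE_RANK[m["mode"]] > MODE_RANK[allowed_mode]:
            g.denied.append(
                f"mount {m['host']}: requested {m['mode']}, host allows {allowed_mode}"
            )
        else:
            g.mounts.append((m["host"], m["container"], m["mode"]))

    for s in agent_cfg.get("secrets", []):
        if s in allowed["secrets"]:
            g.secrets.append(s)
        else:
            g.denied.append(f"secret {s}: not in host allowlist")

    return g


def container_args(grants):
    """Turn agreed grants into docker-run-style flags (assumed launcher shape).

    Secrets are passed by name only (`-e NAME`), so the value is read from the
    launcher's environment on the host and never written into the agent config.
    """
    args = []
    for host, cont, mode in grants.mounts:
        args += ["-v", f"{host}:{cont}:{mode}"]
    for s in grants.secrets:
        args += ["-e", s]
    return args


if __name__ == "__main__":
    g = resolve_grants(AGENT_CONFIG, HOST_ALLOWLIST)
    print("granted:", container_args(g))
    for reason in g.denied:
        print("denied :", reason)
```

Running it grants only `/srv/bot/data:/data:rw` and the two allowlisted secret names, and reports the `.ssh` mount, the `rw` escalation on `/srv/bot/config` and `AWS_SECRET_KEY` as denied. The design choice worth noting is that a mode mismatch is rejected rather than downgraded: a silent downgrade hides tampering from the operator, while an explicit denial surfaces it in the launcher's log.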
Two modes of interaction:
- `hydra exec` gives you a full interactive Claude Code session inside the restricted agent container
- Orchestrated mode for automation: agents communicate via filesystem-based IPC for things like Telegram bots or scheduled tasks
The project was inspired by NanoClaw and completely redesigned to support contained Claude Code sessions with per-agent mounts, secrets, and MCP servers.
You can find the repo here: https://github.com/RickConsole/hydra and the README has the link to the writeup for it.
Happy to answer any questions about the architecture or threat model!