After the Replit database deletion (https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-d...), Claude Code's rm -rf incident, and Google Antigravity wiping a user's D: drive — I built a framework where AI agents can't execute dangerous commands without going through a security layer first.
- Pattern-matching blocklist catches rm -rf, format, and DROP TABLE before they reach the shell (matcher sketched below)
- LLM explains what each command does before you approve it (approval flow sketched below)
- Skill-based access tiers (new agents start with zero destructive capabilities; tier check sketched below)
- 16+ LLM providers (OpenAI, Anthropic, Groq, Ollama, etc.)
- Web UI on localhost:4242, SQLite, zero config
- ~2000 LOC TypeScript, MIT license
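For context, here's a minimal sketch of the pattern-matching layer. The rule names and regexes are illustrative only, not the actual rule set in the repo:

```typescript
// Minimal sketch of the blocklist layer (hypothetical rule names and patterns).
type BlockRule = { name: string; pattern: RegExp };

const BLOCK_RULES: BlockRule[] = [
  { name: "recursive-delete", pattern: /\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\b/i },
  { name: "disk-format",      pattern: /\b(mkfs|format)\b/i },
  { name: "sql-drop",         pattern: /\bdrop\s+(table|database)\b/i },
];

// Returns the first rule a command trips, or null if nothing matches.
function matchBlockRule(command: string): BlockRule | null {
  return BLOCK_RULES.find(rule => rule.pattern.test(command)) ?? null;
}

// A blocked command never reaches the shell; it is escalated for review instead.
const hit = matchBlockRule("rm -rf /var/data");
if (hit) {
  console.log(`Blocked by rule "${hit.name}": awaiting human approval`);
}
```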
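The explain-then-approve step works roughly like this. The `LlmProvider` interface, `explainAndApprove`, and the `askUser` callback are hypothetical names standing in for whatever provider abstraction the repo actually uses:

```typescript
// Hypothetical shape of the explain-then-approve flow; prompt wording is an assumption.
interface LlmProvider {
  complete(prompt: string): Promise<string>;
}

async function explainAndApprove(
  llm: LlmProvider,
  command: string,
  askUser: (question: string) => Promise<boolean>,
): Promise<boolean> {
  const explanation = await llm.complete(
    `In one short paragraph, explain exactly what this shell command does ` +
    `and what it could destroy if run in the wrong place:\n\n${command}`,
  );
  // The agent only reaches the shell if a human reads the explanation and says yes.
  return askUser(`${command}\n\n${explanation}\n\nAllow this command?`);
}
```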
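And the tier gate, again as a sketch with made-up tier and command-class names:

```typescript
// Sketch of skill-based access tiers (tier and class names are illustrative).
enum Tier { Untrusted = 0, ReadOnly = 1, Writer = 2, Operator = 3 }

// Each command class maps to the minimum tier allowed to run it.
const REQUIRED_TIER: Record<string, Tier> = {
  read:    Tier.ReadOnly,  // cat, ls, git status
  write:   Tier.Writer,    // file edits, git commit
  destroy: Tier.Operator,  // rm -rf, DROP TABLE, format
};

function isAllowed(agentTier: Tier, commandClass: string): boolean {
  // Unknown command classes default to the highest requirement.
  return agentTier >= (REQUIRED_TIER[commandClass] ?? Tier.Operator);
}

// New agents start at Untrusted: zero destructive capability by default.
console.log(isAllowed(Tier.Untrusted, "destroy")); // false
console.log(isAllowed(Tier.Operator, "destroy"));  // true
```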
I'd appreciate feedback on the security model — especially edge cases and bypass vectors I might be missing.