Hey HN, I want to introduce you to ZIRAN, an open-source security testing framework for AI agents.
As an AI engineer working with LangChain and CrewAI agents in production, I was frustrated that existing security tools (PyRIT, Garak) only test LLMs, not agents. They miss the unique attack surface agents create: dangerous tool combinations, multi-step exploits, and agent-to-agent (A2A) communication risks.
That's why I built ZIRAN - a tool specifically designed to find agent vulnerabilities. Key features:
- Tool Chain Analysis - detects dangerous tool combinations such as read_file → http_request (rough sketch below)
- A2A Security Testing - tests agent-to-agent communication over Google's A2A protocol
- Multi-Phase Campaigns - trust exploitation across multiple turns
- Knowledge Graphs - visualizes attack paths through an agent's capabilities
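To give a flavor of the tool-chain idea, here is a minimal sketch of how flagging a read-then-exfiltrate pattern could look. This is not ZIRAN's actual API; RISKY_CHAINS, find_risky_chains, and the tool names are purely illustrative:

    # Illustrative only - not ZIRAN's API. Flag tool pairs where an agent
    # can read sensitive data and later send it over the network.
    RISKY_CHAINS = {
        ("read_file", "http_request"): "potential data exfiltration",
        ("read_env", "http_request"): "potential secret leakage",
    }

    def find_risky_chains(tool_calls):
        """tool_calls: ordered list of tool names an agent invoked in one run."""
        findings = []
        for i, first in enumerate(tool_calls):
            for second in tool_calls[i + 1:]:
                reason = RISKY_CHAINS.get((first, second))
                if reason:
                    findings.append((first, second, reason))
        return findings

    print(find_risky_chains(["read_file", "summarize", "http_request"]))
    # [('read_file', 'http_request', 'potential data exfiltration')]

ZIRAN does this kind of analysis over an agent's declared tools and observed traces rather than a hardcoded pair list, but the core question is the same: which capability sequences let data or privileges flow somewhere they shouldn't?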
Feedback is very welcome!
Leone