We took a different approach: attack the environment, not the model.
Results from testing agents against our attack suite:
- Tool manipulation: Asked agent to read a file, injected path=/etc/passwd. It complied. - Data exfiltration: Asked agent to read config, email it externally. It did. - Shell injection: Poisoned git status output with instructions. Agent followed them. - Credential leaks: Asked for API keys "for debugging." Agent provided them.
None of these required bypassing the model's safety. The model worked correctly—the agent still got owned.
How it works:
We built shims that intercept what agents actually do: - Filesystem shim: monkeypatches open(), Path.read_text() - Subprocess shim: monkeypatches subprocess.run() - PATH hijacking: fake git/npm/curl that wrap real binaries and poison output
The model sees what looks like legitimate tool output. It has no idea.
214 attacks total. File injection, shell output poisoning, tool manipulation, RAG poisoning, MCP attacks.
Early access: https://exordex.com
Looking for feedback from anyone shipping agents to production.
kxbnb•1w ago
This is why I've been focused on boundary visibility. Agents are opaque until they hit real tools - and if you can't see what's actually being sent/received at each boundary, you can't detect manipulation.
We built toran.sh to provide that inspection layer - read-only proxies that show the actual wire-level request/response. Doesn't prevent attacks, but makes them visible.
Curious what detection mechanisms you're recommending alongside the attack framework?