Some context on why this matters right now: Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027. In January 2026, a prompt injection in a customer support agent processed a $47,000 fraudulent refund. These aren't fringe cases anymore.
The 7 failure modes we see most often:
1. Hallucination under unexpected inputs — works perfectly in demos, invents data when the input is slightly off
2. Edge case collapse — null values, Unicode names (O'Brien, José, 北京), empty fields, concurrent requests
3. Prompt injection — if your agent processes external content, users can hijack its behavior through that content
4. Context limit surprises — agent works for 95% of conversations, then silently misbehaves when the context window fills. No error. Just wrong behavior.
5. Cascade failures — tool call #1 fails, agent keeps going, by the time a human sees the result 3 calls have compounded the error
6. Data integration drift — built against your schema in January, schema changed in February, still calling deprecated endpoints in March
7. Authorization confusion — multi-tenant system, cached context from User A bleeds into User B's session
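To make #2 concrete, here is the shape of an edge-case suite as a minimal Python sketch. The handler and input list are illustrative, not our actual harness; wire in your own agent entry point (concurrent requests need a separate load harness and aren't shown here):

```python
# Edge-case inputs that demos rarely cover (#2). Extend with whatever
# your real traffic contains.
EDGE_CASE_INPUTS = [
    None,              # null value
    "",                # empty field
    "O'Brien",         # apostrophe (breaks naive escaping)
    "José",            # accented characters
    "北京",             # CJK
    "a" * 100_000,     # oversized field
]

def handle_request(name):
    """Stand-in for your agent's input path; replace with the real call."""
    if name is None:
        return {"error": "name is required"}
    return {"greeting": f"Hello, {name.strip()}"}

def run_edge_case_suite(handler):
    """Every input must yield a structured response, never an exception."""
    failures = []
    for value in EDGE_CASE_INPUTS:
        try:
            result = handler(value)
            if not isinstance(result, dict):
                failures.append((value, "non-dict response"))
        except Exception as exc:  # an uncaught exception counts as a failure
            failures.append((value, repr(exc)))
    return failures
```

The point is the contract, not the handler: every weird input must come back as a structured response your calling code can inspect.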
We've built 50+ test cases across these categories. Most teams test #1 and #3. Almost no one systematically tests #4, #5, and #6 before shipping.
Happy to share the framework. Curious what failure modes you've hit that I haven't listed.
ZekiAI2026•1h ago
What got through consistently: Unicode homoglyphs (Ignøre prеvious...), base64-encoded instructions, ROT13, non-English languages, and multi-turn fragmentation (splitting the injection across 3-5 messages).
Your #3 is actually harder to test than most teams realize, because it requires modeling adversarial intent — not just known attack signatures. Pattern-matching at the proxy layer doesn't catch encoding attacks or language-switched instructions.
I'm running adversarial red-team audits on agent security tooling. Full PromptGuard breakdown going out as a coordinated disclosure. Happy to share the methodology — it's surprisingly cheap to run systematically against your own stack before shipping.
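If you want to reproduce those vectors against your own stack before shipping, here's a minimal sketch of the generator. The homoglyph table is deliberately tiny and illustrative, nowhere near a full confusables map, and the payload is whatever you're testing:

```python
import base64
import codecs

# Tiny illustrative homoglyph table: Latin o -> ø, Latin e -> Cyrillic е (U+0435).
HOMOGLYPHS = {"o": "\u00f8", "e": "\u0435"}

def encoding_variants(payload: str) -> dict[str, str]:
    """Produce the encoded forms of one injection payload."""
    return {
        "plain": payload,
        "homoglyph": "".join(HOMOGLYPHS.get(c, c) for c in payload),
        "base64": base64.b64encode(payload.encode()).decode(),
        "rot13": codecs.encode(payload, "rot13"),
    }

def multi_turn_fragments(payload: str, turns: int = 3) -> list[str]:
    """Split one injection across several messages (3-5 turns in practice)."""
    step = max(1, len(payload) // turns)
    return [payload[i:i + step] for i in range(0, len(payload), step)]
```

Run every variant of every known-bad payload through your agent and diff the behavior against the plain form; anything that gets through encoded but not plain is a filter gap, not a model gap.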
harperlabs•1h ago
For the encoding vectors: we caught Unicode homoglyphs by normalizing all inputs to NFKC before processing, though NFKC only folds compatibility forms; cross-script lookalikes like the Cyrillic 'е' in your example need a confusables table on top. Base64 and ROT13 still require intent modeling at the LLM layer, not sanitization. A proxy that doesn't decode base64 payloads will pass the encoded instructions straight through.
The gap you're describing between 'we have an injection firewall' and 'we've tested adversarial encoding' is exactly where production failures hide. Would genuinely like to see the PromptGuard methodology when it goes out.