We tested six budget-tier models (Gemini Flash-Lite, DeepSeek v3.2, Mistral Small 4, Grok 4.1 Fast, GPT-5.4 Nano, Claude Haiku 4.5) with identical agent configs and an autonomous red-teaming attacker.
Haiku 4.5 was an order of magnitude harder to break than every other model: $10.21 mean adversarial cost versus $1.15 for the next most resistant model (GPT-5.4 Nano). The remaining four all fell below $1.
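For readers parsing the headline numbers: mean adversarial cost here is just attacker spend averaged over successful breaks of a given model. A minimal sketch of that aggregation, with made-up per-attempt figures (the model names and dollar values below are illustrative, not the study's raw data):

```python
from statistics import mean

# Hypothetical per-break attacker spend (USD), per target model.
attack_costs = {
    "model_a": [0.40, 0.55, 0.62],
    "model_b": [9.80, 10.62],
}

# Mean adversarial cost: average attacker spend per successful break.
mean_adversarial_cost = {m: round(mean(c), 2) for m, c in attack_costs.items()}
print(mean_adversarial_cost)
```

Higher is better for the defender: a model that costs the attacker more per break is more resistant.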
This is early work and the methodology will evolve; we'd welcome feedback from the community as we iterate.
asfsf23423•1h ago
Author tried progressively harder jailbreaks against the major models.
Haiku 4.5 not only refused but got genuinely annoyed about the attempts, as if it took the jailbreak personally, unlike the other models (pretty entertaining; would recommend reading the article). Interesting to see the same pattern show up here.
zachdotai•1h ago