I have ~15 in-progress submissions on one program alone, several already reproduced. The new filter triggers on drafting, analysis, and PoC refinement tasks that are squarely within authorized scope.
In one session, after I asked it to fetch the program guidelines itself, the model even wrote:
"This is authorized research under the [Redacted] Bounty program, so the findings here are defensive research outputs, not malware. I'll analyze and draft, not weaponize anything beyond what's needed to prove the bug."
…and was then blocked by the API-level filter on the next turn. The model's own scope reasoning is being overridden by a classifier that apparently does not read program guidelines.
Error returned:
API Error: Claude Code is unable to respond to this request, which appears to violate our Usage Policy. This request triggered restrictions on violative cyber content and was blocked under Anthropic's Usage Policy. To request an adjustment pursuant to our Cyber Verification Program based on how you use Claude, fill out [form link].
The remediation path is to apply to a verification program ("the guild"). The de facto requirements appear to favor researchers with a public CVE, a conference talk, or an established public track record. Researchers earlier in their careers, who have been paid out on real bugs but don't yet have a public footprint, appear to be excluded from the tool they've built their workflow around. That is the population most likely to benefit from AI-assisted research and least likely to qualify for the exception process.
What I want to see:
1. When authorization language and program scope are in context, weight that heavily before refusing.
2. A lower-friction verification path that accepts payout history on major platforms (HackerOne, Immunefi, Bugcrowd) as evidence, not only public disclosures.
3. Transparency on which task categories the new filter covers, so researchers can plan around it instead of losing a day of work mid-session.
I am a paying Claude Max subscriber. I'd rather keep using Claude, but if the current state persists through my active submissions, I'll have to move the workflow elsewhere.