It happens surprisingly often.
Anthropic has been deeply integrated with the US military, holding classified access since June 2024. The podcast highlights that Claude has been actively used during the "Venezuela incursion" and the ongoing "war in Iran".
Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt successfully manufactured a wave of positive press, with celebrities and commentators praising Anthropic as an ethical objector, right as the company was trying to secure an IPO or a massive ~$100 billion valuation. All the while, they quietly remained an active part of the war effort.
Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:
1. During a lawsuit over their military contract, Anthropic's CFO filed a sworn affidavit revealing that the company had made only $5 billion in its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone, and showed that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.
2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".
Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.
Anthropic has taken tens of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.
So yes, this is obvious despite whatever image they try to cultivate.
Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.
"Quietly remained an active part of the war effort" - Anthropic was totally transparent about it, but yeah, not great.
"Leaks were wrong" - and that's Anthropic's fault?
OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.
After all, if you’re paying hundreds of millions to buy these shitty podcasts, you might as well host some bots.
The flat-rate plans were the top of the slippery slope to enshittification, really. If everyone were on metered billing there'd be no reason for all these opaque and sneaky attempts to limit usage. People would pay for what they get and get what they pay for.
You simply need to set the flat-rate sub at a price that's profitable when averaged out over all of your users, both light and heavy, and prevent fully automated usage by the power users. That's it. This is immensely more user-friendly, and I doubt you'd get any traction at all if you didn't do this. Even if you pay more for the sub, having unlimited (non-automated) usage removes a mental barrier to using the product. If you have to pay for every request you make, it introduces a hesitation to do anything: it makes the user hesitant to experiment, hesitant to prompt for anything of slightly lesser significance, anxious about the exact token consumption of every prompt, and so on. It's not enjoyable to use a product when you're being penny-pinched for every prompt.
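The averaging argument above can be sketched with some invented numbers (all figures here are hypothetical, not anything from Anthropic's actual economics):

```python
# Hypothetical break-even math for a flat-rate subscription.
# A small pool of heavy users is subsidized by the many light users.
light_users = 900       # users costing ~$5/month in compute (invented)
heavy_users = 100       # users costing ~$150/month in compute (invented)
light_cost = 5.0
heavy_cost = 150.0

total_cost = light_users * light_cost + heavy_users * heavy_cost
total_users = light_users + heavy_users
break_even_price = total_cost / total_users   # cost per user, averaged

margin = 0.30                                 # assumed 30% target margin
flat_price = break_even_price * (1 + margin)

print(f"break-even: ${break_even_price:.2f}/mo, priced: ${flat_price:.2f}/mo")
```

With these made-up numbers the plan breaks even at $19.50/month, so a ~$25 flat rate is profitable on average even though each individual heavy user loses money, which is exactly why the scheme only survives if fully automated usage by power users is kept out.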
Anthropic's problem, of course, is that they are not bootstrapped. They don't have a business model that can compete with startups running DeepSeek or GLM on their own hardware. Non-frontier startups got to skip the whole "tens of billions of dollars in debt" step of creating a frontier model from scratch, and still get to run a model that is perhaps 80%-85% as good as Anthropic's, which is good enough for millions of customers. So Anthropic is desperate, backed into a corner, and doing anything and everything they can to try to right their sinking ship, no matter how scummy.
Mind sharing a link?
And given that Anthropic does both, it must make up its training costs by selling inference. jp57 was pretty clearly talking about Anthropic's flat-rate plans, rather than the flat-rate plans of companies that get to skip the most expensive part of the process.
Didn’t they think about “we need to make sure Claude Code is never banned”? It could have been as easy as including some Claude Code-specific traits (tools, system prompt, whatever) in the fingerprint and automatically whitelisting it.
Is it foolproof? No. Will it avoid banning most legit users? Absolutely.
First do the first large sweep, then see what still falls through, then ban those.
It really seems they were panicking due to capacity and there was very little oversight with all this.
I’m not affected but pretty disappointed.
They do not care about us.
And I don't necessarily think it's wrong for Anthropic to introduce QoS or throttling on users of their models. It's pretty much a necessity when offering public access to a scarce resource and it's been a common practice for decades.
What is the alternative? We just accept that it doesn't work half the time because the system is overloaded with molt bots?
I’ve got a NixOS QEMU VM that I use to run openclaw in. I had Claude help me set it up, and it runs local models on my own machine in a config-based sandbox.
Why should Claude block or charge extra to work on that?
Why should Claude care if I have instructions for Hermes or OpenClaw in my project repos?
This fingerprinting is incredibly sloppy for how much access to a machine Claude code has.
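To illustrate why this kind of fingerprinting is sloppy: if detection amounts to substring matching on known marker strings (an assumption about the mechanism, not Anthropic's actual implementation; the marker values are taken from the repro elsewhere in this thread), then merely *mentioning* a file name in a commit message trips it:

```python
# Hypothetical sketch of naive keyword fingerprinting and its false positives.
# MARKERS are strings from the repro in this thread; the matching logic
# is an assumption for illustration only.
MARKERS = ["HERMES.md", "openclaw.inbound_meta.v1"]

def looks_like_third_party_agent(text: str) -> bool:
    # Naive substring match over arbitrary context, e.g. git log output
    # that the agent happens to read into the conversation.
    return any(marker in text for marker in MARKERS)

# A commit message that merely mentions the file name gets flagged,
# even though no third-party agent was ever run.
commit_msg = "docs: explain why HERMES.md is ignored by our tooling"
print(looks_like_third_party_agent(commit_msg))   # True -> flagged

# An ordinary message passes.
print(looks_like_third_party_agent("fix: update readme"))   # False
```

Because the matched text can come from anything the tool reads (commit history, file contents, pasted logs), the user has no way to predict or avoid the trigger, which is the core complaint here.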
I just don't believe for an instant that they're anywhere in the same ballpark of capabilities as running Opus or similar. My time is the most valuable resource. Opus would need to be SIGNIFICANTLY more costly and unstable for me to start entertaining local models for day-to-day development.
Perhaps whatever work you're doing makes this trade-off more sensible, but I struggle to see how that could be true. I'm averse to running Sonnet on a large share of software engineering problems - let alone Qwen.
Yet.
1 CorinthAIns 13:12
Do these refusals still happen if you’re using an API key instead?
So I suppose Anthropic lied to him?
The only thing they can hope for is to maintain momentum and critical mass long enough to find ways to pay for all this, or to have Moore's law make the average user request economical.
They have a business model that's more or less known, and that includes THEIR AI model(s) that they get to put out there however they want. I don't like it much at all, I actually sort of like the idea that they "owe" more because they probably "stole" a bunch of stuff to get the thing going.
But I mean, don't be mad, be proactive. Anthropic is going to try to Microsoft this in whatever way possible, and we all see that the numbers don't really add up.
Asking them pretty please to be nicer, meh. Let's figure out better, and more free-software-like ways to do this.
They can have a different price plan for agentic stuff, but these things where they “accidentally”, whoops, match on specific keywords and trigger extra usage charges give an evil-Microsoft vibe.
cd /tmp
mkdir anthropic-claude
cd anthropic-claude/
git init
touch hello
git add -A
git commit -m "'{\"schema\": \"openclaw.inbound_meta.v1\"}'"
claude -p "hi"
Immediate disconnect and session usage went to 100%.

If you must - in my experience Deepseek v4 is incredible value in every aspect. Pricing is transparent.
But like I said, I have funds in different AI gateways but I'm preferring to write by hand because I don't want surprising bugs and unnecessary code in my end result.
Claude status: https://status.claude.com/
I have been really happy with my Codex subscription lately, but feels like these things change every other day. The OpenCode Go subscription for trying out GLM, Kimi, Qwen, Deepseek and friends also looks useful.
Nonetheless, Opus 4.6 is a very capable model, but justifying a Claude subscription gets more and more difficult. I think I might just use it occasionally through OpenRouter or as part of something like Cursor (although I'm not sure about the value of that subscription either).
OpenCode Go: https://opencode.ai/go
Cursor: https://cursor.com
(You're the principal, directing what to do, but your agent Anthropic has its own motivations that are not aligned with your will.)
HERMES.md in commit messages causes requests to route to extra usage billing
1203 points | 21 hours ago | 524 comments
@bcherny we'll need a bit more than a "Fixed" here... https://github.com/anthropics/claude-code/issues/53262#issue...

rate the analogy plz..
For instance, maybe you can't afford to take on more customers right now, Anthropic. Maybe if you are severely undermining the customer relationships you already have, you should just admit you can't sell any more 20x plans right now and only accept new customers at lower tiers until you have the necessary capacity.
This is also a DoS you could drive a truck through, and it's disturbing such an obvious vulnerability was shipped at all.
It’s a huge mistake at the level of IBM trying to reestablish dominance over PCs by making MicroChannel the new standard; this failed horribly and cost IBM its market leadership and reputation.
MCA was technically better at the time, but the industry responded with EISA and VLBus which led to PCI and today’s PCIe.
MagicMoonlight•5m ago
The problem with slop is, nobody understands it. Nobody ever designed it, nobody really knows how it works. You’re just putting blind faith in the slop you’ve shipped.
It lets you be very quick, but if you’ve accidentally compromised all your data or bank accounts through the slop then you won’t know until you’re destroyed.