It's always been: socialize the losses, capitalize the gains. And all the while, legislating rules to block upstart companies.
It's never, ever been "fair". He who has the most gold makes the rules.
It would be like if Carnegie Steel somehow could have prohibited people from buying their steel in order to build a steel mill.
And moreover in this case the entire industry as it exists today wouldn't exist without massive copyright infringement, so it's double-extra ironic because Anthropic came into existence in order to make money off of breaking other people's rules, and now they want to set up their own rules.
Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?
You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the max plan. Which? makes sense?
From the thread:
> A good rule of thumb is, are you launching a Claude code oauth screen and capturing the token. That is against terms of service.
that said, it is absolutely being enforced against other big model labs who are mostly banned from using claude.
TL;DR: You cannot reverse engineer the OAuth API without encountering this message:
https://tcdent-pub.s3.us-west-2.amazonaws.com/cc_oauth_api_e...
Add in various 2nd/3rd place players (Codex and Copilot) with employees openly using their personal accounts to cash in on the situation and there's a lot of amplification going on here.
Based on the terms, Section 3, subsection 2 prohibits using Claude/Anthropic's Services:
"To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services."
Clarification:
This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
What IS prohibited:
- Using Claude to develop a competing AI assistant or chatbot service
- Training models that would directly compete with Claude's capabilities
- Building a product that would be a substitute for Anthropic's services
What is NOT prohibited:
- General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
- Using Claude as a coding assistant for ML projects
- Training domain-specific models for your business needs
- Research and educational ML work
- Any ML development that doesn't create a competing AI service
In short: I can absolutely help you develop and train ML models for legitimate use cases. The restriction only applies if you're trying to build something that would compete directly with Claude/Anthropic's core business.
So you can't use Claude to build your own chatbot that does anything remotely like Claude, which would be basically any LLM chatbot. I am not a lawyer, and regardless of the list of examples below (I have been told examples in contracts and ToS are a mixed bag for enforceability), this text says that if Anthropic decides to make a product like yours, you have to stop using Claude for that product.
That is a pretty powerful argument against depending heavily on or solely on Claude.
And I'm pretty sure ban evasion can become an issue in a court of law, even if the original ToS may not hold up.
What I do give a crap about is the AI companies being little bitches when you politely pilfer what they have already snatched. Their hypocrisy is unlimited.
but the prohibition also goes way further: it's not limited to training competing LLMs, but also covers programming any of the plumbing etc. around them...
I know we want to turn everything into a rental economy aka the financialization of everything, but this is just super silly.
I hope we're 2-3 years away, at most, from fully open source and open weights models that can run on hardware you can buy for $2000 and that can do most of what Opus 4.5 can do today, even if slower or needing a bit more handholding.
Including how much RAM?
In theory, China could catch up on memory manufacturing and break the OMEC oligopoly, but they could also pursue high profits and stock prices, if they accept global shrinkage of PC and mobile device supply chains (e.g. Xiaomi pivoted to EVs), https://news.ycombinator.com/item?id=46415338#46419776 | https://news.ycombinator.com/item?id=46482777#46483079
AI-enabled wearables (watch, glass, headphones, pen, pendant) seek to replace mobile phones for a subset of consumer computing. Under normal circumstances, that would be unlikely. But high memory prices may make wearables and ambient computing more "competitive" with personal computing.
One way to outlast siege tactics would be for "old" personal computers to become more valuable than "new" non-personal gadgets, so that ambient computers never achieve mass consumer scale, or the price deflation that powered PC and mobile revolutions.
No, the ToS literally says you cannot.
And Anthropic really goes out of its way to ban China. Other model providers do a decent job of restricting access in general but look away when someone tries to circumvent those restrictions. But Claude uses extra mechanisms to make it hard. And the CEO is on record about China issues: https://www.cnbc.com/2025/05/01/nvidia-and-anthropic-clash-o...
If you use Claude models through Cursor, Anthropic still applies its own policies on usage. Just recently they cut off xAI employees' access to Claude models on Cursor [0]. X has threatened to ban Anthropic from X.
[0]: https://x.com/kyliebytes/status/2009686466746822731?s=46
Good luck catching me, I'm behind 7 proxies.
It's a clear anti-competitive clause by a dominant market leader; such clauses tend to be void AFAIK.
Most people still haven’t heard of Anthropic/Claude.
(For the record, I use Claude code all day long. But it’s still pretty niche outside of programming.)
So the only thing you kind of need to figure out is a VPN, and ProtonVPN provides a free VPN service which includes access to EU servers as well.
I wonder if Claude Code or these AI services block VPN access though.
If they do, ehh, just buy a cheap EU VPS (Hetzner, my beloved) and call it a day; plus you also get a free dev box which can run your code 24x7, among other perks.
e: for those downvoting, i would earnestly like to hear your thoughts. i want opencode and similar solutions to win.
Opus might currently be the best model out there, and CC might be the best of the commercial tools, but once someone switches to OpenCode plus multiple model providers, chosen depending on the task, Anthropic is going to have difficulty winning them back, considering pricing and their locked-down ecosystem.
I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus plus OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models plus a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.
as a consumer, i do absolutely prefer the latter model - but i don't think that is the position I would want to be in if I were anthropic.
Peak HN comment
You went from a 5 minute signup (and 20-200 bucks per month) to probably weeks of research (or prior experience setting up workstations) and probably days of setup. Also probably a few thousand bucks in hardware.
I mean, that's great, but tech companies are a thing because convenience is a thing.
Just wanted to make sure before I sign up for an OpenAI sub.
This tweet reads as nonsense to me
It's quoting:
> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.
And the linked tweet says that such integration is against their terms.
The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.
I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.
If this is only meant to limit knowledge distillation for training new models, people copying Claude Code specifically, or Max plan credentials being used as an API replacement, they could properly carve out exceptions rather than being so broad that it risks turning away new customers for fear of (future) conflict.
How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.
OpenAI hoovered up everything they could to train their model with zero shits about IP law. But the moment other models learned from theirs they started throwing tantrums.
I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.
We didn't sign.
Compilers don't come with terms that prevent you from building competing compilers. IDEs don't prevent you from writing competing IDEs. If coding agents are supposed to be how we do software engineering from now on, yeah, it's pretty bad.
If they were "standard" terms, then how come no other AI provider imposes them?
The ToS is concerning, I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.
(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)
This kind of thing is pretty standard. Nobody wants to be a vendor on someone else's platform. Anthropic would likely not complain too much about you using z.ai in Claude Code. They would prefer that. They would prefer you use gpt-5.2-high in Claude Code. They would prefer you use llama-4-maverick in Claude Code.
Because regardless of how profitable inference is, if you're not the closest to the user, you're going to lose sooner or later.
Anthropic's COGS is the rent on some number of H100s. The cost of a marginal query for them is almost zero until the batch fills up and they need a new cluster. So API clusters are usually built for peak load, running at low utilization (batches rarely filled) at any given time. Given that AI's peak demand is extremely spiky, they end up with low utilization numbers on the API side.
Your subscription is supposed to use that spare capacity. Hence the token costs are not that high, and hence you can buy it cheaply. But it needs careful management so that you don't overload the system. Claude Code has telemetry which identifies the request as lower priority than the API (and probably informs queueing and caching decisions too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the subscription price is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
It's reasonable for a company to unilaterally decide how it monetizes its extra capacity, and it's not unjustified to care. You are not purchasing the promise of X tokens with a subscription; for that you need the API.
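The economics described above can be sketched as a toy model: a cluster is a fixed cost, the marginal query is nearly free until a new cluster is needed, and utilization therefore drives the effective cost per query. All numbers here are hypothetical placeholders, not Anthropic's actual costs.

```python
import math

CLUSTER_COST_PER_HOUR = 100.0   # hypothetical fixed rent for one cluster
QUERIES_PER_CLUSTER = 1_000     # hypothetical batch capacity per hour

def effective_cost_per_query(queries_per_hour: int) -> float:
    # You pay for whole clusters, sized for the load you must serve.
    clusters = math.ceil(queries_per_hour / QUERIES_PER_CLUSTER)
    return clusters * CLUSTER_COST_PER_HOUR / queries_per_hour

# Spiky demand means provisioning for the peak and eating idle capacity:
print(effective_cost_per_query(1_000))  # a fully utilized cluster
print(effective_cost_per_query(100))    # same cluster, 10% utilized
```

In this toy model the 10%-utilized cluster makes each query ten times as expensive, which is why routing subscription traffic into the idle capacity, at lower priority, is attractive.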
I understand what you mean, but outright removing the ability of other agents to use the Claude Code subscription is still really harsh.
If telemetry really is the reason (note: I doubt it is; I think the marketing/lock-in aspect matters more, but for the sake of discussion let's assume telemetry is in fact the reason),
then they could have simply coordinated with OpenCode or other agent providers. In fact, this is what OpenAI is doing: they recently announced a partnership/collaboration with OpenCode and are actively embracing it. I am sure that OpenCode and other agents could generate telemetry, or at least support such a feature if need be.
Telemetry is a reason, and it's also the stated reason. Marketing is plausible and likely part of the reason too, but lock-in etc. would have meant this came way sooner than now. They would not even be offering an API if they really wanted to lock people in; that is not consistent with their other actions.
At the same time, the balance is delicate. If you get too many subscription users and not enough API users, then suddenly the setup is not profitable anymore, because there is less underused capacity available to direct subscription users to. This probably explains part of their stance too, and why they hadn't done it till now. OpenAI never allowed it, and now that they do, they will make more changes to the auth setup, which Claude did not. (This episode tells you how duct-taped the whole system was at Anthropic: they used the auth key to generate a Claude Code token, and just used that to hit the API servers.)
And if inference is so profitable, why is OpenAI losing $100B a year?
“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”
This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctrl+Enter in the editor buffer) without having to pay for yet another metered API.
The answer is no, the above was a response to a follow up.
An aside: everyone is abuzz about "Chat to Code", which is a great interface when you are leaning toward never, or only occasionally, looking at the generated code. But for writing prose? It's safe to say most people definitely want to be looking at what's written, and in this case "chat" is not the best interaction. Something like the inline assistant, where you are immersed in the writing, is far better.
Code can be objectively checked with automated tools to be correct/incorrect.
Art is always subjective.
I mean they could have put _exactly_ that into their terms of service.
Hijacking rate limits is also never really legal AFAIK.
OpenCode already does 120% of what CC does.
I don't pay for any AI subscription. I just end up building single-file applications, but sometimes they might not be that good, so I tried this experiment: I ask Gemini in AI Studio, or ChatGPT, or Claude, get the files, then paste them into OpenCode and ask it to build the file structure and paste everything in.
If your project includes setting up something, say SvelteKit or any boilerplate project, and contains many, many files, I recommend this workflow to get the best of both worlds for essentially free.
To be really honest, I mostly just end up creating single-file main.go programs for my use cases directly from the website, and I really love them. Sure, code understandability takes a bit of a hit, but my projects usually stay around ~600 to 1,000 lines, at most 2,000, and I really love this workflow; for me personally, it's one of the best.
When I try AI agents, they end up creating 25 or 50 files and over-architecting. I use AI mostly for prototyping, and that over-architecture actually hurts.
Mostly I just create software for my own use cases though. Whenever I face any problem that I find remotely interesting that impacts me, I try to do this and this trend has worked remarkably well for me for 0 dollars spent.
Anthropic blocks third-party use of Claude Code subscriptions
I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.
Worst case scenario they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.
Very poor communication, despite some reasonable intention, could be the beginning of the end for Claude Code.
But being first doesn't mean you're necessarily the best. Not to mention, they weren't the first anyway (aider was).
Presumably because it costs them more than $200 per month to sell you it. It's a loss leader to get you into their ecosystem. If you won't use their ecosystem, they'd rather you just go over to OpenAI.
They lose money on the $200/month plan, maybe even quite a bit. So that plan only exists to subsidize their editor.
Could be about the typical "all must be under our control" power fantasies companies have.
But if there really is "no moat" and open models will be competitive in just a matter of time, then having "the" coding editor might be majorly relevant for driving sales. Ironically, they seem to have already kind of lost that, if what some people say about Claude Code vs. OpenCode is true...
│ Total │ │ 3,884,609 │ 3,723,258 │ 215,832,272 │ 3,956,313,197 │ 4,179,753,336 │ $3150.99 │
It calculates tokens at public API pricing. But Anthropic models are also generally more expensive than others, so I guess it's sort of 'self-made' value? Some of it?
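As a rough illustration of how such a dollar figure is derived, here is a sketch that prices the token counts in the table above at hypothetical per-million-token rates. The rates below are placeholders, not Anthropic's actual prices (which vary by model), so the total will not match the $3150.99 shown.

```python
# $ per million tokens (hypothetical placeholder rates)
RATES = {
    "input": 3.00,
    "output": 15.00,
    "cache_write": 3.75,
    "cache_read": 0.30,
}

# Token counts taken from the table above
usage = {
    "input": 3_884_609,
    "output": 3_723_258,
    "cache_write": 215_832_272,
    "cache_read": 3_956_313_197,
}

cost = sum(usage[k] * RATES[k] / 1_000_000 for k in usage)
print(f"${cost:,.2f}")
```

Note that almost all of the total comes from cache writes and cache reads, which is why the choice of cache pricing dominates any "value" estimate like this.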
1. Pay for a stock photo library and train an image model with it that I then sell.
2. Use a spam detection service, train a model on its output, then sell that model as a competitor.
3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.
This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means I can't ask Claude to build things, then train a new LLM based on what Claude built. Claude can't be your training data.
There's an argument to be made that Anthropic didn't obtain their training material in an ethical way, so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS, then violated it by paying for output that I then used to clone the exact behavior of the service."
No, they actually do. Basically, they provide the Claude Code subscription for $200, which is a loss leader: you can realistically get around $300-400 of value per month, or even more (close to $1000), if you were paying API rates.
So why do they bear the loss, I hear you ask?
Because it acts as a marketing expense for them. They get so much free advertising, in a sense, from Claude Code, and Claude Code is still closed source; they enforce a lot of restrictions, which others mention (sometimes even downstream). I have seen efforts to run other models on top of Claude Code, but it's not a first-class citizen and there is still some lock-in.
On the other hand, something like OpenCode is really perfect, has no lock-in, and is absolutely goated. Now those guys and others had created a way to use the Claude subscription itself; I think you were able to just use the OAuth sign-in and that was about it.
Now it all works out great for everyone except Anthropic, because the only reason Anthropic offered this was to get the marketing campaign/lock-in, which OpenCode and others undercut; thus they came and did this. OpenCode also prevents any lock-in and has the ability to swap models really easily, which many really like, thus removing dependence on Claude as a lock-in as well.
I really hate this behaviour, and when one comes to think about it, this is really monopolistic behaviour from Anthropic.
Theo's video might help in this context: https://www.youtube.com/watch?v=gh6aFBnwQj4 (Anthropic just burned so much trust...)
In fact, there's exactly nothing illegal about me replacing what Anthropic is doing with books by me personally reading the books and doing the job of the AI with my meat body (unless I'm quoting the text in a way that's not fair use).
But that's not even what's at issue here. Anthropic is essentially banning the equivalent of people buying all the Stephen King books and using them to start a service that specifically makes books designed to replicate Stephen King's writing. Claude being able to talk about Pet Sematary doesn't compete with the sale of Pet Sematary. An LLM trained on Stephen King books with the purpose of creating rip-off Stephen King books arguably does.
Suppose I wrote a custom agent which performs tasks for a niche industry; wouldn't that be considered "building a competing service", because their Service is performing agentic tasks via Claude Code?
Anthropic has just entered the "for laying down and avoiding" category.
Claude Code making itself obsolete is banned.
This really shouldn't be the direction Anthropic goes in. It is such a negative direction, and they could have instead tried to cooperate with the large open source agents, talking and communicating with them; instead they chose to do this, which the developer community has met with criticism, and rightfully so.