But they have also shown a weakness by failing to understand why people might want to do this (use their Max membership with OpenCode etc. instead).
People aren't using opencode or crush with their Claude Code memberships because they're trying to exploit or overuse tokens or something. That isn't possible.
They do it because Claude Code, the tool itself, is full of bugs and has performance issues, while OpenCode is of higher quality, has more open (surprise) development, is more responsive about bug fixes, and gives them far more knobs and dials to control how it works.
I use Claude Code quite a bit and there isn't a session that goes by where I don't bump into a sharp edge of some kind: notorious terminal rendering issues, slow memory leaks, or compaction-related bugs that took them 3 months to fix...
Failure to deal with quality issues and listen to customers is hardly a good sign of company culture leading up to an IPO... If they're trying to build a moat... this isn't a strong way to do it.
If you want to own the market and have complete control at the tooling level, you're simply going to have to make a better product. With their mountain of cash and army of engineers at their disposal ... they absolutely could. But they're not.
But to me the appeal of OpenCode is that I can mix and match APIs and local models. I have DeepSeek R1 doing research while KLM is planning and doing code reviews and o4 mini is breaking down screenshots into specs while a local Qwen is doing the work.
My experience with bugs has also been the exact opposite of what you described.
The best pressure on companies comes from viable alternatives, not from boycotts that leave you without tools altogether.
Archaeologist.dev Made a Big Mistake
If guided by this morality column, Archaeologist should immediately stop using pretty much everything in their life. There's no company today that doesn't have its hands dirty. Life is a dance of choosing the least bad option, not radically cutting off anything that looks "bad".
Claude, ChatGPT, Gemini, and Grok are all more or less on par with each other, or a couple months behind at most. Chinese open models are also not far behind.
There's nothing inherent to these products to make them "sticky". If your tooling is designed for it, you can trivially switch models at any time. Mid-conversation, even. And it just works.
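To make that concrete: assuming the providers expose OpenAI-compatible endpoints (the endpoints, keys, and model names below are just illustrative placeholders), "no switching cost" looks roughly like one shared message history replayed against whichever backend you want:

```python
# Hypothetical mix-and-match: one shared conversation, two different backends.
from openai import OpenAI

providers = {
    # Placeholder endpoints/models; substitute whatever you actually run.
    "deepseek": OpenAI(base_url="https://api.deepseek.com/v1", api_key="sk-..."),
    "local":    OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),  # e.g. Ollama
}

history = [{"role": "user", "content": "Draft a plan for refactoring the parser."}]

# Plan with a hosted reasoning model...
plan = providers["deepseek"].chat.completions.create(
    model="deepseek-reasoner", messages=history)
history.append({"role": "assistant", "content": plan.choices[0].message.content})

# ...then hand the same conversation to a local model to do the work.
history.append({"role": "user", "content": "Now implement step 1."})
impl = providers["local"].chat.completions.create(
    model="qwen2.5-coder", messages=history)
print(impl.choices[0].message.content)
```

Nothing in the history is tied to a vendor, which is exactly why the switching cost is near zero.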
When you have basically equivalent products with no switching cost, you have perfect competition. They are all commodities. And that means: none of them can make a profit. It's a basic law of economics.
If they can't make a profit, no matter how revolutionary the tech is, their valuation is not justified, and they will be in big trouble when people figure this out.
So they need to make the product sticky somehow. So they:
1. Add a subscription payment model. Once you are paying a subscription fee, then the calculus on switching changes: if you only maintain one subscription, you have a strong reason to stick with it for everything.
2. Force you to use their client app, which only talks to their model, so you can't even try other models without changing your whole workflow, which most people won't bother to do.
These are bog standard tactics across the tech industry and beyond for limiting competitive pressure.
Everyone is mad about #2 but honestly I'm more mad about #1. The best thing for consumers would be if all these model providers strictly provided usage-based API pricing, which makes switching easy. But right now the subscription prices offer an enormous discount over API pricing, which just shows how desperate they really are to create some sort of stickiness. The subscriptions don't even provide the "peace of mind" benefit that Spotify-like subscription models provide, where you don't have to worry about usage, because they still have enforced usage limits that people regularly hit. It's purely a discount offered for locking yourself in.
But again I can't really be that mad because of course they are doing this, not doing it would be terrible business strategy.
If they're going to close the sub off to other tools, they need to make very strong improvements to the tool. And I don't really see that. It's "fine" but I actually think these tools are letting developers down.
They take over too much. They fail to give good insights into what's happening. They have poor stop/interrupt/correct dynamics. They don't properly incorporate a basic review cycle, which is something we demand of junior developers and interns on our teams, but somehow not of our AIs?
They're producing mountains of sometimes-good but often unreviewable code and it isn't the "AI"'s fault, it's the heuristics in the tools.
So I want to see innovation here. And I was hoping to see it from Anthropic. But I just saw the opposite.
I myself have been building a special-purpose vibe-coding environment and it's just astounding how easy it is to get great results by trying totally random ideas that are just trivial to implement.
Lots of companies are hoping to win here by creating the tool that everyone uses, but I think that's folly. The more likely outcome is that there are a million niche tools and everyone is using something different. That means nobody ends up with a giant valuation, and open source tools can compete easily. Bad for business, great for users.
I have no idea what JetBrains' financials are like, but I doubt they're raking in huge $$ despite having very good tools, and unfortunately their attempts to keep abreast of the AI wave have been middling.
Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.
IMHO these are not professional SWE tools right now. I use them on hobby projects but struggle to integrate them into professional day jobs where I have to be responsible in a code review for the output they produced.
And, again, it's not the LLM that's at fault. It's the steering wheel driving it that's missing a basic non-yeet process flow.
It's irresponsible to your teammates to dump giant finished pieces of work on them for review. I try to impress that on my coworkers; I don't appreciate getting code reviews like that, and I'd feel bad doing the same.
Even worse if the code review contains blocks of code which the author doesn't fully understand themselves, because it came as one big block from an LLM.
I'll give you an example -- I have a longer-term, bigger task at work for a new service. I had discussions and initial designs I fed into Claude. "We" came to a consensus and ... it just built it. In one go, mainly. It looks fine. That was Friday.
But now I have to go through that and say -- let's now turn this into something reviewable for my teammates. Which means basically learning everything this thing did, and trying to parcel it up into individual commits.
Which is something that the tool should have done for me, and involved me in.
Yes, you can prompt it to do that kind of thing. Plan is part of that, yes. But planning, implement, review in small chunks should be the default way of working, not something I have to force externally on it.
What I'd say is this: these tools right now are programmer tools, but they're not engineer tools.
Well, no. It just means no single player can dominate the field in terms of profits. Anthropic is probably still losing money on subscribers, so other companies "reselling" their offering does them no good. Forcing you to use their TUI at least gives them back control over how you interact with the models. I'm guessing, but since they've gone full send into the developer tooling space, their pitch to investors likely highlights the # of users on CC, not their subscriber numbers (which, again, lose money). The move makes sense in that respect.
I remember the story used to be the other way around - "just a wrapper", "wrapper AI startups" were everywhere, and nobody trusted they could make it.
Maybe being "just a model provider" or "just an LLM wrapper" matters less than the context of the work. What I mean is that the benefits accrue neither to the model provider nor to the wrapper provider, but where the usage takes place: whoever sets the prompts and uses the code gets the lion's share of the benefits from AI.
Being "just a wrapper" wouldn't be a risky position if the LLMs would be content to be "just a model." But they clearly wouldn't be, and so it wasn't.
It looks like they need to update their FAQ:
Q: Do I need extra AI subscriptions to use OpenCode? A: Not necessarily. OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI, etc. You can even connect your local models.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance comes to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
You can use the Anthropic API in any tool, but these users wanted to use the claude code subscription.
To be clear, I’ve seen this sentiment across various comments, not just yours, but I just don’t agree with it.
https://builders.ramp.com/post/why-we-built-our-background-a...
Apparently nobody gets the Anthropic move: they are only good at coding, and that's a very thin layer. Opencode and other tools are well placed to collect inputs and outputs that can later be used to train their own models - not necessarily being done now, but they could; Cursor did it. Also, Opencode makes it all easily swappable: just eval something by popping in another API key and see if Codex or GLM can replicate the CC solution. Oh, it does! So let's cancel Claude and save big bucks!
Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc). The move totally makes sense, like it or not.
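For reference, that base-URL override is just an environment variable, so pointing the CC agent at an Anthropic-compatible gateway is roughly a one-liner; a rough sketch follows (the gateway URL and token are placeholders, and only the plain Messages API translates cleanly, not the newer agent features):

```python
# Hypothetical: launch the claude CLI against an Anthropic-compatible gateway.
import os
import subprocess

env = dict(
    os.environ,
    ANTHROPIC_BASE_URL="https://my-gateway.example.com",  # placeholder endpoint
    ANTHROPIC_AUTH_TOKEN="sk-placeholder",                 # placeholder credential
)
subprocess.run(["claude", "-p", "Summarize the open TODOs in this repo"], env=env)
```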
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.
It's a supposedly professional tool with a value proposition that requires being in your work flow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
An error message says to contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you're forbidden from giving them more than the Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
Why is that their “huge asset”? The gist of this complaint is that Opencode et al. replace everything but the LLM, so it seems like the latter is the true “huge asset.”
If Claude Code is being offered at or near operational breakeven, I don’t see the advantage of lock-in. If it’s being offered at a subsidy, then it’s a hint that Claude Code itself is medium-term unsustainable.
“Training data” is a partial but not full explanation of the gap, since it’s not obvious to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.
In all seriousness, I really don't think it should be a controversial opinion that if you are using a company's servers for something, they have a right to dictate how and under what terms. It is up to the user to determine whether that is acceptable or not.
Particularly when there is a subscription involved. You are very clearly paying for "Claude Code" which is very clearly a piece of software connected to an online component. You are not paying for API access or anything along those lines.
Especially when they are not blocking the ability to use the normal API with these tools.
I really don't want to defend any of these AI companies but if I remove the AI part of this and just focus on it being a tool, this seems perfectly fine what they are doing.
1. The company did something the customers did not like.
2. The company's reputation has value.
3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (alongside with "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.
I could write an article and complain about Taco Bell not selling burgers, and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.
Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.
Considering you need to tell the agent that the tool you are using is something it isn't, it is clear that this was never intended to work.
It’s CC with Qwen and KLM and other OSS and/or local models.
What's changed is that I thought I was subscribing to use their API services, claude code as a service. They are now pushing it more as using only their specific CLI tool.
As a user, I am surprised, because why should it matter to them whether I open my terminal and start up using `claude code`, `opencode`, `pi`, or any other local client I want to send bits to their server.
Now, having done some work with other clients, I can kind of see the point of this change (to play devil's advocate): their subscription limits likely assume aggregate usage across all users doing X amount of coding, which works especially well with their own CLI tool because of client-side and server-side caching and tool-call log filtering, something 3rd party clients also do with varying effectiveness (see the rough sketch below).
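To sketch what I mean by caching (the model id and prompt text below are placeholders): a client that consistently marks its stable prefix (system prompt, tool definitions) as cacheable gets repeated turns served largely from cache, while a client that doesn't pays full price every turn and chews through the same limits faster.

```python
# Hypothetical sketch of prompt caching with the Anthropic Messages API.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",   # placeholder: any current model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are a coding agent. <long, stable tool instructions>",
            # Mark the stable prefix cacheable so later turns reuse it.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Fix the failing test in foo_test.py"}],
)
print(response.content[0].text)
```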
So I can imagine a reason why they might make this change, but again, I thought I was subscribing to a prepaid account where I can use their service within certain session limits, and I see no reason why the cli tool on my laptop would matter then.
This is really the salient point for everything. The models are expensive to train but ultimately worthless if paying customers aren't captive and can switch at will. The issue is that a lot of the recent gains are in the prefill inference and in the model's RAG, which aren't truly a moat (except maybe for Google, if their RAG includes Google Scholar). That's where the bubble will pop.
While Anthropic was within their right to enforce their ToS, the move has changed my perspective. In the language of moats and lock-ins, it all makes sense, sure, but as a potential sign of the shape of things to come, it has hurt my trust in CC as something I want to build on top of.
Yesterday, I finally installed OpenCode and tried it. It feels genuinely more polished, and the results were satisfactory.
So while this is all very anecdotal, here's what Anthropic accomplished:
1) I no longer feel like evangelizing for their tool.
2) I installed a competitor and validated it's as good as others are claiming.
Perhaps I'm overly dramatic, but I can't imagine I'm the only one who has responded this way.
What? That's a thing? Why would a vibe coder be "renowned"? I use Claude every day, but this is just too much.