But I literally cannot cancel. Trying in the app gives me "you signed up on a different platform, go there", but it doesn't tell me which platform that might be.
Trying to cancel on mobile web gives several upgrade options but no cancel options.
So, do I need to call my credit card company? This is the worst subscription dark pattern I have seen from any service I have ever paid for!
Anthropic had a fairly positive image in my head until they cut off my access and are not giving me a way to cancel my plan.
Edit: after mucking with the Stripe credit card payment options, I found a cancel plan button underneath the list of all invoices. So there is an option; I just had a harder time finding it than I have had with other services. Successfully cancelled!
As long as you don't cancel, you do owe them money. But if they make cancelling intentionally hard, one would likely have a good case in court to still not pay, if one were willing to go to court over this.
Gemini Advanced offered 2.5 Pro with nearly unlimited rate limits, then nerfed it to 100/day.
OpenAI silently nerfed the maximum context window of reasoning models in their Pro plan.
Accompanying the nerf is usually a psy op, like nerfing to 50/day then increasing it to 100/day so the anchoring effect reduces the grievance.
It's a smart ploy because, as much as we like to say there's no moat, the user does face provider switching costs (time and effort), which serve as a mini-moat for the status quo provider.
So providers have an incentive to rope people in with a loss leader, and then rug pull once they've gained market share. Maybe 40% of the top 5% of Claude users are now too accustomed to their Claude-based workflows, and inertia will keep them as customers, but now they're using the more expensive API instead. Anthropic won.
Modern bait and switch, although done intelligently so no laws are broken.
To the degree there is a moat, I do not think it will be effective at keeping people in. I had already been somewhat disillusioned with the AI hype, but now I am also disillusioned with the company who I thought was the best actor in the space. I am happy that there is unlikely to be a dominant single winner like there was for web search or for operating systems. That is, unless there's a significant technological jump, rather than the same gradual improvement that all the AI companies are making.
Likewise: a faulty, unproven, hallucinating, error-prone service, however good, was a good value at approx. 25 USD/month in an "absolutely all you can eat", wholesale regime...
...now? Reputational risk aside, they force their users to appraise their offering in terms of actual value offered, in the market.
That's a good thing, right?
When a provider gets memory working well, I expect it to become a huge moat, i.e. they won't let you migrate the memories, because rather than being human-readable words they'll be unintelligible vectors.
I imagine they'll do the same via API so that the network has a memory of all previous requests for the same user.
Hell, “just open a new chat and start over” is an important tool in the toolbox when using these models. I can’t imagine a more frustrating experience than opening a new chat to try something from scratch only for it to reply based on the previous prompt that I messed up.
It's as if Google said that yes, emails are $5/mo, but there's actually a limit on the number of emails per day, and also on the number of characters per email. It just feels so illegal to nerf a product that much.
Same with AI companies changing routing and making models dumber from time to time.
Maybe they added a card fee in at the end, but if they didn’t make that abundantly clear, they’ve broken a law in most countries which use the Euro.
Parent clearly stated they only saw "€170+VAT" and not €206.55, so of course they expected to see €206.55 before the purchase went through. Not sure what anyone else would expect?
At the rate the Chinese are going, it won't be long before I can shake the dust of this bullshit off my sandals for good.
I still revert to Gemini Pro 2.5 here and there, and Claude for specific demanding tasks, but bulk tokens go through open-weight models at the moment.
Update: below the fold at the bottom of the Billing page is the cancel section and cancel button.
Update 2: just clicked cancel and was offered a promo of 20% off for three months...
Update 3: FYI, I logged in to my Claude account via computer (not iOS or Android).
But it is so hard to explain to product people that there is a limit to how much certain services can scale and be profitably supported.
"The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption."
"It's not rocket science. It’s our way of attracting users. Not bait and switch, but credits to try."
It was never unlimited.
They never advertised unlimited usage. The Max plan clearly said it had higher limits.
This fabrication of a backstory is so weird. Why do so many people believe this?
- note: "unlimited" does not mean free.
quote source: "Apple Just Found a Way to Sell You Nothing" https://www.youtube.com/watch?v=ytkk5NFZGjs
Don't blame the company; it acts within boundaries allowed by its paying customers, and Apple customers are known to be... much less critical of the company and its products, to put it politely, especially given its premium prices.
This is patently false and has been for the whole existence of Apple. Apple customers are vociferously critical of the company. Just probably not above the threshold of importance that you consider.
https://www.forbes.com/sites/barrycollins/2024/11/28/mac-own...
Repairs have always come with deductibles.
This is standard in virtually every insurance program. There are a lot of studies showing that even the tiniest amount of cost sharing completely changes how people use a service.
When something is unlimited and free, it entices people to abuse it in absurd ways. With hardware, you would get people intentionally damaging their gear to get new versions for free, because they know it costs them nothing.
Unlimited works better for startups because they have zero idea what load challenges will come in the future, and they don't have much idea how well their product will be received in the market.
Anthropic got that experience and decided they needed to prioritize reasonableness over customer trust. And they are a startup, so we all get this.
OTOH there is no such thing as unlimited. Atoms in the universe are finite. Your use is finite. Your time is finite. Your abuse is limited and finite. You are a sucker for believing in the unlimited myth, just like you think others are suckers for believing in divine intervention, or conspiracy theorists are suckers for believing in unlimited power.
Philosophical meandering and blaming the customer for not understanding a company's shady marketing is not something I'd consider cool.
One thing I miss, for the other users, i.e. the casual users who never use anywhere near their quota, is rollover: if you haven't used your quota this month, the unused portion rolls over to the next month.
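Rollover is simple bookkeeping on the provider's side. A minimal sketch, assuming a carryover cap so balances don't grow without bound (the function name and the cap are my own illustration, not any carrier's actual policy):

```python
def next_month_allowance(monthly_quota: int, used: int, carryover_cap: int) -> int:
    """Next month's allowance: the base quota plus any unused amount, capped."""
    unused = max(monthly_quota - used, 0)  # overuse never goes negative
    return monthly_quota + min(unused, carryover_cap)

# A 150 GB plan with 40 GB used rolls 110 GB into next month (if under the cap).
```

Capping the carryover is what makes this sustainable for the provider while still rewarding light months.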
In the same way your next-door supermarket has effectively "infinite soup cans" for the needs of most people.
I thought I had low usage, with my 1.5 years' worth saved. The only reason I pay for that plan is that anything lower and my provider does not offer rollover.
E.g. here in Slovenia, if you want unlimited calls and texting, you get 150 GB in your "package" for €9.99, but you somehow can't save that data for the next month.
https://www.hot.si/ponudba/paketi.html (not affiliated)
This is a somewhat different issue that's largely accepted by courts and society, bar that one neighbour who is incensed they can't run a rack off their home internet that was marketed as unlimited.
Even better: provide a counter displaying both remaining usage available and the quota reset time.
But companies probably earn so much money from the vast majority of users that having good and clear limits would only empower them to actually benefit as much from the product as they can.
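The counter idea is cheap to build. A minimal sketch of a per-user counter that exposes both remaining usage and the reset time (class and field names are hypothetical, not any provider's real API):

```python
from datetime import datetime, timedelta, timezone

class UsageCounter:
    """Tracks a user's quota in a rolling window and reports what's left and when it resets."""

    def __init__(self, quota: int, window: timedelta):
        self.quota = quota
        self.window = window
        self.used = 0
        self.window_start = datetime.now(timezone.utc)

    def record(self, amount: int = 1) -> bool:
        """Count usage; returns False once the quota is exhausted."""
        self._maybe_reset()
        if self.used + amount > self.quota:
            return False
        self.used += amount
        return True

    def status(self) -> str:
        """The transparency part: remaining usage plus the reset time."""
        self._maybe_reset()
        resets_at = self.window_start + self.window
        return f"{self.quota - self.used} requests left, resets at {resets_at:%H:%M UTC}"

    def _maybe_reset(self):
        now = datetime.now(timezone.utc)
        if now - self.window_start >= self.window:
            self.window_start = now
            self.used = 0
```

Surfacing `status()` in the UI is all it would take; the opacity is a product choice, not a technical constraint.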
Every company wants the marketing of unlimited, but none of them want the accountability.
The AI models have a bunch of different consumption models aimed at different types of use. I work at a huge company, and we’re experimenting with different ways of using LLMs for users based on different compliance and business needs. The people using all you can eat products like NotebookLM, Gemini, ChatGPT use them much more on average and do more varied tasks. There is a significant gap between low/normal/high users.
People using an interface to a metered API, which offers a defined LLM experience consume fewer resources and perform more narrowly scoped tasks.
The cost is similar and satisfaction is about the same.
There is no such thing as "unlimited" or "lifetime" unless it's self-hosted.
Or the American Airlines lifetime pass.. https://www.aerotime.aero/articles/american-airlines-unlimit...
Nothing in our world is truly unlimited. Digital services and assets have different costs than their physical counterparts, but that just means different limits, not a lack of them. Electrical supply, compute capacity, and storage are all physical things with real world limits to how much they can do.
These realities eventually manifest when someone tries to build an "unlimited" service on top of limited components, similar to how you can't build a service with 99.999% reliability when it has a critical piece that can only get to 99.9%.
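The reliability point is just multiplication: availabilities of components in series compound, so a chain containing one 99.9% dependency can never reach 99.999% overall. A quick check:

```python
def chain_availability(*components: float) -> float:
    """Overall availability of serial components: the product of each part's availability."""
    result = 1.0
    for a in components:
        result *= a
    return result

# Four five-nines components behind a single three-nines dependency:
overall = chain_availability(0.99999, 0.99999, 0.99999, 0.99999, 0.999)
# The chain is capped just below its weakest link, at roughly 0.99896.
```

The same arithmetic applies to "unlimited" built on limited parts: the promise can never exceed what the weakest underlying component delivers.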
yep
>Adverse selection has been discussed for life insurance since the 1860s,[3] and the phrase has been used since the 1870s.[4]
In some cases, people discover creative ways to resell the service. Anthropic mentioned they suspect this was happening.
The weirdest part about this whole internet uproar, though, is that Anthropic never offered unlimited usage. It was always advertised as higher limits.
Yet all the comment threads about it are convinced it was unlimited and now it’s not. It’s weird how the internet will wrap a narrative around a story like this.
When you order the second plate, it comes without the sauce and tastes flatter. You're full at this point and can't order a third.
Very creative and fun, if you ask me. I was prepared for it, though, because the people I went with had told me exactly how it was going to go.
There is a case to be made that they sold a multiple of x and are now changing x, or rate-limiting x differently, but the tone seems different from that.
no, even their announcement blog[0] said:
> With up to 20x higher usage limits
in the third paragraph.
They appear to have removed reference to this 50-session cap in their usage documents. (https://gist.github.com/eonist/5ac2fd483cf91a6e6e5ef33cfbd1e...)
So even the mystery people Anthropic references, who supposedly ran it "in the background, 24/7", would still have had to stay within usage limits.
It always had limits and those limits were not specified as concrete numbers.
It’s amazing how much of the internet outrage is based on the idea that it was unlimited and now it’s not. The main HN thread yesterday was full of comments complaining about losing unlimited access.
It’s so weird to watch people get angry about thinking they’re losing something they never had. Even Anthropic said less than 5% of accounts would even notice the new limits, yet I’ve seen countless comments raging that “everyone must suffer” due to the actions of a few abusing the system.
Can you really ever compete when you are renting someone else's GPUs?
Can you really ever compete when you are going up against custom silicon built and deployed at scale to run inference at scale (i.e. TPUs built to run Gemini and deployed by the tens-of-thousands in data centers around the globe)?
Meta and Google have deep pockets and massive existing world-class infrastructure (at least for Google, Meta probably runs their php Facebook thing on a few VPS dotted around in some random colos /s ) . They've literally written the book on this.
It remains to be seen how much more money OpenAI can burn, but we've started to see how much Anthropic can burn if nothing else.
So, not unlimited? Like, if the abuse is separate from amount of use (like reselling; it can be against ToS to resell it even in tiny amounts) then sure, but if you're claiming "excessive" use is "abuse", then it is by any reasonable definition not unlimited.
Correct, not "unlimited" as in the dictionary definition of unlimited. Unlimited as in the plain meaning of unlimited as it is commonly used in this subject matter area. I.e., use it reasonably or hit the bricks, pal.
If there is a clear limit (and it seems there is now), then stop saying "unlimited" and start selling "X queries per day". You can even let users pay for additional queries if needed.
(yes i know queries is not a proper term to use here, but the principle stands)
Ugh, anyone who says that and really believes it can no longer see common sense through the hype goggles.
It's just stupid and completely 100% wrong, like saying all musicians will use autotune in the future because it makes the music better.
It's the same as betting that there will be no new inventions, no new art, no works of genius unless the creator is taking vitamin C pills.
It's one of the most un-serious claims I can imagine making. It automatically marks the speaker as a clown divorced from basic facts about human ability
And AI already excels at building those sorts of things faster and with cleaner code. I’ve never once seen a model generate code that’s as ugly and unreadable as a lot of the low quality code I’ve seen in my career (especially from Salesforce “devs” for example)
And even the ones that do the more creative problem solving can benefit from AI agents helping with research, documentation, data migration scripts, etc.
Yet the blanket statement is that I will fail and be replaced, and in fact that people like me don't exist!
So heck yeah I'll come clap back on that.
There is absolutely something real here, whether you choose to believe it or not. I'd recommend taking a good faith and open minded look at the last few months of developments. See where it can benefit you (and where it still falls way short).
So even if you may have arrived at your conclusion years ago, I assure you that things continue to improve by the week. You will be pleasantly surprised. This is not all or nothing, nor does it have to be.
Code is like law. The goal isn't to have a lot of it. Come to me when by "more productive" you actually mean that the person using the LLM deleted more lines of code than anyone else around them while preserving the stability and power of the system.
PS nobody wants to come to you.
Why are you so bitter?
You aren't going to sell any snake oil with this venomous strategy.
I get cheesed off cause the AI people disrespect hard work and try to devalue its rewards, and that message is just toxic to anyone trying to learn. You can't master a musical instrument by paying an assistant to practice it for you.
So are musicians. We think of them as doing creative work, but the vast majority of it is mundane.
(though who knows, maybe at some time in the future there will be significant numbers of people programming as a hobby and wanting to be coached by a human...)
but you're right that "I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not." could have multiple other readings too.
Let's be fair - I made it intentionally a little provocative :)
What I might not have mentioned is that I've spent the last 5 years and 20,000 or so hours building an IDE from scratch. Not a fork of VSCode, mind you, but the real deal: a new "kernel" and integration layer with abilities that VSCode and its forks can't even dream of. It's a proper race and I'm about to drop the hammer on you.
If the user can't feel any difference in quality between human-made software and AI-made software, then it does not matter. It is that easy.
If AI makes better software, at lower prices, human developers will become obsolete. It is the natural way of life.
Once we had telephone operators, now we don't. Once you had to be a good tradesman with good knowledge in how to use a mallet/hammer, axe, chisel, etc. to build a house - now you don't. You don't get awarded for being old-fashioned.
But that doesn't change the fact that the VAST majority of people are just fine with mass-produced furniture.
I think this is the difference that's going to happen to software.
There's the one doing everything in bare vim with zero assist, just rawdogging function names and library methods from rote memory.
And then there's the rest who use all the tools at their disposal to solve problems. Is the work super clean, efficient and fully hand-crafted? Nope. Does it solve the problem? Yes it does, and fast.
There may very well be a texture to “hand crafted” software that will be totally lost on users. But I kind of doubt the difference will be anything like the difference in furniture.
When you finally give it a go you will feel stupid for having this opinion.
I did.
In any case there are better and stronger arguments against LLMs than this.
This will be another abstraction layer that MANY people will use and be able to accomplish things that would have been impossible to do in a reasonable amount of time in machine code.
And for a few decades at least it was true. The technology was shite compared to film photography for a long time. The same will probably be true for AI, as full developer replacement will require AGI.
Now very few laundry doers measure out their detergent by hand.
Current gen of LLMs are tools. Use them or not. Judging others based on whether they use tools at all vs how they use them is… naive.
Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable, those who reject tools that fundamentally expand their problem-solving capacity will find themselves unable to compete with those who can architect solutions across larger possibility spaces on smaller teams.
Will it be used for absolutely every problem? No - There are clearly places where humans are needed.
But rejecting the enormous impact this will have on the workforce is trading hype goggles for a bucket of sand.
I don't think you should use LLMs for something you can't master without.
> will find themselves unable to compete
I'd wait a bit more before concluding so affirmatively. The AI bubble would very much like us to believe this, but we don't yet know very well the long term effects of using LLMs on code, both for the project and for the developer, and we don't even know how available and in which conditions the LLMs will be in a few months as evidenced by this HN post. That's not a very solid basis to build on.
I agree, but it's not mine.
To your second point -- With as much capital as is going into data center buildout, the increasing availability of local coding LLMs that near the performance of today's closed models, and the continued innovation on both open/closed models, you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I think we simply don't have similar mental models for predicting the future.
We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1] that sees productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning the LLMs.
What's more, most people are not masters. This is critically important. If only masters see a productivity increase, others should not use it... and will still get employed because the masters won't fill in all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.
> With as much capital as is going into
Yes, we are in a bubble. And some are predicting it will burst.
> the continued innovation
That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.
> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a basket that could abruptly disappear or become very costly.
I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider when and if LLMs become economically viable.
But that's not my strongest reason to avoid the LLMs anyway:
- I don't want to increase my reliance on SaaS (or very costly hardware)
- I have not yet caved in to participating in this environmental disaster, and in this work-pillaging phenomenon (well, on that last part I guess I don't really have a choice; I see the dumb AI bots hammering my forgejo instance).
[1] https://www.sciencedirect.com/science/article/pii/S016649722...
AI presently has a far lower footprint on the globe than the meat industry; the US beef industry alone far outpaces the impact of AI.
As far as "work pillaging" - There is cognitive dissonance in supporting the freedom of information/cultural progress and simultaneously desiring to restrict a transformative use (as it has been deemed by multiple US judges) of that information.
We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
That's quite uncharitable.
I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things, I'll let the creativity of the readers fill in the gap.
> AI presently has a far lower footprint on the globe than [X]
We see the same kind of arguments for planes, cars, anything with a big impact really. It still has a huge (and growing) environmental impact, and the question is do the advantages outweigh the drawbacks?
For instance, if a video call tool allowed you to have a meeting without taking a plane, the video call tool had a positive impact. But then there's also the ripple effect: if without the tool, the meeting hadn't happened at all, the positive impact is less clear. And/or if the meeting was about burning huge amounts of fuel, the positive impact is even less clear, just like LLMs might just allow us to produce attention-seeking, energy-greedy shitty software at a faster speed (if they indeed work well in the long run).
And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.
And I'm all for stopping the meat disaster as well.
> We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
Yep :-)
I'm not sure; I frequently use LLMs for well-scoped, math-heavy functions (mostly for game development) where I don't necessarily understand what's going on inside the function, but I know what output I expect given some inputs, so it's easy for me to black-box test it with unit tests and iterate on the "magic" inside with an LLM.
I guess if I really stopped and focused on math for a year or two I'd be able to code that myself too, but every time I try to get deeper into math, it's either way too complex for me to feel like it's time well spent, or it's just boring. So why bother?
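That black-box workflow is easy to make concrete: pin down known input/output pairs, then let the LLM iterate on the "magic" until the suite passes. A sketch with a made-up example function (the function and its internals are purely illustrative, not from any real project):

```python
import math

def angle_between(ax: float, ay: float, bx: float, by: float) -> float:
    """The LLM-generated 'magic': smallest angle between two 2D vectors, in radians."""
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # Clamp guards against floating-point values slightly outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / norm)))

# Black-box checks: no need to follow the math inside, only that
# known inputs produce the expected outputs.
assert math.isclose(angle_between(1, 0, 0, 1), math.pi / 2)
assert math.isclose(angle_between(1, 0, 1, 0), 0.0)
assert math.isclose(angle_between(1, 0, -1, 0), math.pi)
```

If a regenerated version of the function breaks one of these assertions, it gets sent back for another iteration; the tests are the spec.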
I didn't have such cases in mind, was replying to the "navigate complexity at scales human cognition wasn't designed for" aspect.
Human cognition wasn't designed to make rockets or AIs, but we went to the moon and the LLMs are here. Thinking and working and building communities and philosophies and trust and math and computation and educational institutions and laws and even Sci Fi shows is how we do
We also killed quite a few astronauts.
But the loss of their lives also proves a point: that achievement isn't a function of intelligence but of many more factors like people willing to risk and to give their lives to make something important happen in the world. Loss itself drives innovation and resolve. For evidence, look to Gene Kranz: https://wdhb.com/wp-content/uploads/2021/05/Kranz-Dictum.pdf
https://en.wikipedia.org/wiki/Rogers_Commission_Report#Flawe...
> Loss itself drives innovation and resolve
True, but did NASA in 1986 really need to learn this lesson?
This isn't (just) rocket science, it's the fundamentals of risk liability, legality and process that should be well established in a (quasi-military) agency such as this.
They knew they were taking some gambles to try to catch up in the Space Race. The urgency that justified those gambles was the Cold War.
People have a social tendency to become complacent about catastrophic risks when there hasn't been a catastrophe recently. There's a natural pressure to "stay chill" when the people around you have decided to do so. Speaking out about risk is scary unless there's a culture of people encouraging other to speak out and taking the risks seriously because they all remember how bad things can be if they don't.
Someone actually has to stand up and say "if something is wrong I really actually want to and need to know." And the people hearing that message have to believe it, because usually it is said in a way that it is not believed.
The use cases of these GPT tools are extremely limited. They demo well and are quite useful for highly documented workflows (e.g. they are very good at creating basic HTML/JS layouts and functionality).
However, even the most advanced GPT tools fall flat on their face when you start working with any sort of bleeding edge, or even just less-ubiquitous technology.
The Godot engine is an open-source project that has matured significantly since GPT tools hit the market.
The GPTs don't know what the new Godot features are, and there is a training gap that I'm not sure OpenAI and their competitors will ever be able to overcome.
Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
With Claude I can even write IC10 code. (with a bit of help and understanding of how Claude works)
IC10 is a fictional, MIPS-like CPU in the game Stationeers. So that's pretty promising for most other things.
I just use "AI" instead of Google/SO when I need to find something out.
So far it mostly answers correctly, until the truthful answer comes close to "you can't do that". Then it goes off the rails and makes up shit. As a bonus, it seems to confuse related but less popular topics and mixes them up. Specific example: it mixes up CouchDB and Couchbase when I ask about features.
The worst part is 'correctly' means 'it will work but it will be tutorial level crap'. Sometimes that's okay, sometimes it isn't.
So it's not that it doesn't work for my flow, it's that I can't trust it without verifying everything so what flow?
Edit: there's a codebase that i would love to try an "AI" on... if i wouldn't have to send my customer's code to $random_server with $random_security with $untrustable_promises_of_privacy. Considering how these "AI"s have been trained, I'm sure any promise that my code stays private is worth less than used toilet paper.
Gut feeling is the "AI" would be useless because it's a not-invented-here codebase with no discussion on StackOverflow.
This passage forces me to conclude that this comment is sarcasm. Neither IDEs nor the use of Stack Overflow is anywhere near a requirement for being a professional programmer. Surely you realize there are people out there who are happily employed while still using stock Vim or Emacs? Surely you realize there are people out there who solve problems simply by reading the docs and thinking deeply rather than asking SO?
The usage of LLM assistance will not become a requirement for employment, at least not for talented programmers. A company gating on the use of LLMs would be preposterously self-defeating.
I see people rapidly unlearning how to work by themselves and becoming dependent on GPT, making themselves quite useless in the process. They no longer understand what they're working with and need help from the tool to work. They're also entirely helpless when whatever 'AI' tool they use can't fix their problem.
This makes them both more replaceable and less marketable than before.
It will have and already has a huge impact. But it's kinda like the offshoring hype from a decade ago. Everyone moved their dev departments to a cheaper country, only to later realize that maybe cheap does not always mean better or even good. And it comes with a short term gain and a long term loss.
Nobody knows how this will play out yet. Reality does not care about your feelings, unfortunately.
But on the other hand there is the other end who think AGI coming in a few months and LLMs are omniscient knowledge machines.
There is a sweet spot in the middle.
But the big thing is using AI to learn new things, explain some tricky math in a paper I am reading, help brain storm, etc. The value of AI is in improving ourselves.
To me this seems to be the single most valuable use case of newer "AI tools"
> generating a Bash shell script quickly
I do this very often, and to me this seems to me the second most valuable use case of newer "AI tools"
> The value of AI is in improving ourselves
I agree completely.
> help brain storm
This strikes me as very concerning. In my experience, AI brainstorming ideas are exceptionally dull and uninspired. People who have shared ideas from AI brainstorming sessions with me have OVERWHELMINGLY come across as AI brained dullards who are unable to think for themselves.
What I'm trying to say is that Chat GPT and similar tools are much better suited for interacting with closed systems with strict logical constraints, than they are for idea generation or writing in a natural language.
Really, it is like students using AI: some are lazy and expect it to do all the work, some just use it as a tool as appropriate. Hopefully I am not misunderstanding you and others here, but I think you are mainly complaining about lazy use of AI.
*: I'm aware of cases like the recent ffmpeg assembly work that gave a big performance boost. When talking about industrial trend lines, I'm OK with admitting 0.001% exceptions.
(Apologies if it comes across as snarky or pat, but I honestly think the comparison is reasonable.)
But... what else? These things are rare. It’s not like there’s a new thing that comes along every few years and we all have to jump on or be left behind, and LLMs are the latest. There’s definitely a new thing that comes along every few years and people say we have to jump on or be left behind, but it almost never bears out. Many of those ended up being useful, but not essential.
I see no indication that LLMs or associated tooling are going to be like compilers and version control where you pretty much can’t find anyone making a living in the field without them. I can see them being like IDEs or debuggers or linters where they can be handy but plenty of people do fine without them.
Where would you put the peak? Fortran was invented in the 50’s. The total population of programmers was tiny back then…
Are you aware compilers are deterministic most of the time?
If a compiler had a 10% chance of erasing your code instead of generating an executable you'd see more people still using assembly.
The basic nature of my job is to maintain the tallest tower of complexity I can without it falling over, so I need to take complexity and find ways to confine it to places where I have some way of knowing that it can't hurt me. LLMs just don't do that. A leaky abstraction is just a layer of indirection, while a true abstraction (like a properly implemented high-level language) is among the most valuable things in CS. Programming is theory-building!
Even if things are going the direction you say, though, Kilo is still just a fork of VSCode. Lipstick on a pig, perhaps. I would bet that I know the strengths and weaknesses of your architecture quite a lot better than anyone on the Kilo team because the price of admission for you is not questioning any of VSCode's decisions, while I consider all of them worthy of questioning and have done so at great length in the process of building something from scratch that your team bypassed.
a developer using AI in a low-cost region will replace any developer in a high cost region ;)
I believe that at some point, AI will get good enough that most companies will eventually stop hiring someone that doesn’t utilize AI. Because most companies are just making crud (pun intended). It’ll be like specialized programming languages. Some will exist, and they may get paid a lot more, but most people won’t fall into that category. As much as we like to puff ourselves up, our profession isn’t really that hard. There are a relative handful of people doing some really cool, novel things. Some larger number doing some cool things that aren’t really novel, just done very nicely. And the majority of programmers are the rest of us. We are not special.
What I don’t know is the timing. I don’t expect it to be within 5 years (though I think it will _start_ in that time), but I do expect it within my career.
> Stop selling "unlimited", when you mean "until we change our minds"
The limits don't go into effect until August 28th, one month from yesterday. Is there an option to buy the Max plan yearly up front? I honestly don't know; I'm on the monthly plan. If there isn't a yearly purchase option, no one is buying unlimited and then getting bait-and-switched without enough time to cancel their sub if they don't like the new limits.
> A Different Approach: More AI for Less Money
I think it's really funny that the "different approach" is a limited time offer for credits that expire.
I don't like that the Claude Max limits are opaque, but if I really need pay-per-use, I can always switch to the API. And I'd bet I still get >$200 in API-equivalents from Claude Code once the limits are in place. If not? I'll happily switch somewhere else.
And on the "happily switch somewhere else", I find the "build user dependency" point pretty funny. Yes, I have a few hooks and subagents defined for Claude Code, but I have zero hard dependency on anything Anthropic produces. If another model/tool comes out tomorrow that's better than Claude Code for what I do, I'm jumping ship without a second thought.
The field is moving so fast that whatever was best 6 months ago is completely outdated.
And what is top tier today, might be trash in a few months.
wait until their investors get fed up with pouring money down the drain and demand they make a profit from the median user
that model training and capex to build the giant DCs and fill them with absurdly priced nvidia chips isn't free
as an end user: you will be the one paying for it
“Claude is promising unlimited and it isn’t sustainable.”
“For a limited time only pay us $20 and get $80 worth of credits.”
Look at what these people on HN said!
Come on.
When some users burn massive amounts of compute just to climb leaderboards or farm karma, it's not hard to imagine why providers respond with tighter limits: not because that's ideal, but because that kind of behavior makes the platform harder to sustain and less accessible for everyone else. On the other hand, a lot of genuine customers are canceling because they get an API overload message after paying $200.
I still think caps are frustrating and often too blunt, but posts like that make it easier to see where the pressure might be coming from.
[1] https://www.reddit.com/r/ClaudeAI/comments/1lqrbnc/you_deser...
Surely they thought about 'bad users' when they released this product. They can't be that naive.
Now that they have captured developer mindshare, users are bad.
what was the bait and switch? where in the launch announcement (https://www.anthropic.com/news/max-plan) did they suggest it provided unlimited inference?
why is anthropic tweeting about 'naughty users that ruined it for everyone' ?
they launched Claude Max (and Pro) as being limited. it was limited before, and it's limited now, with a new limit to discourage 24/7 maxing of it.
in what way was there a bait and switch?
Switch: "We limited your usage weekly and monthly. You don't know how those limits were set; we do, but that's not information you need to know. However, instead of choosing to hoard your usage out of fear of hitting the dreaded limit again, you've hit it again and again, using the product exactly the way it was intended, and now look what you've done."
_________ *here's where we tell you the limits we said we didn't have
And how does this compare to the case with "unlimited"? Overall, will the total usage be higher or lower?
The transparency problem compounds this. The sustainable path forward likely involves either much more transparent/clear usage-based pricing or significantly higher flat rates that actually cover heavy usage.
How am I supposed to bait people into my product to screw them up then?
Find a better alternative.
And to be clear, the users abusing the "unlimited" rates they were offering to do absolutely nothing productive (see vibe-coding subreddits) are no better.
Gemini did go from a huge free tier to 100 free uses a day, but I expected that.
EDIT: let me clarify: I just retired after over 50 very happy years working as a software developer and researcher. My number one priority was always self-improvement: learning new things and new skills that incidentally I could sometimes use to make money for whoever was paying me. AI is awesome for learning and general intellectual pursuits, and pairs nicely with reading quality books, listening to lectures on YouTube, etc.
What's the alternative when every other vendor (eventually) has the same limit?
When companies sell unlimited plans, they’re making a bet that the average usage across all of those plans will be low enough to turn a profit.
These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
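That bet is easy to see in a toy model. A minimal sketch of the flat-fee economics, where every number is invented for illustration:

```python
# Toy model of the flat-fee bet: the plan is profitable only if the
# usage distribution across subscribers stays cheap enough on average.
# Every number here is invented for illustration.
price = 200.0                 # flat monthly fee per subscriber
cost_per_unit = 0.50          # provider's marginal cost per usage unit

# Per 100 subscribers: 90 light users, 9 heavy users, 1 extreme user.
usage = [50] * 90 + [400] * 9 + [5000] * 1   # usage units per month

revenue = price * len(usage)
cost = cost_per_unit * sum(usage)
print(f"revenue=${revenue:,.0f}  cost=${cost:,.0f}  margin=${revenue - cost:,.0f}")
# The single extreme user alone costs $2,500: 12.5x their $200 fee.
```

The plan still profits overall here, but a small shift in the tail (a few more 5,000-unit users) flips the margin negative, which is exactly the position a provider responds to by adding caps.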
LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
Because Claude Code is absolutely impossible to use without a subscription? I’m fine with being limited, but I’m not with having to pay more than $200/month
Anybody that feels they’re not getting enough out of their subscription is welcome to use API instead.
Claude Code accepts an API key. You do not need a subscription
https://docs.anthropic.com/en/docs/claude-code/settings#envi...
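For reference, switching Claude Code to pay-per-token billing is just an environment variable per the settings docs linked above (the key value below is a placeholder, not a real key):

```shell
# Use API-key (pay-per-token) billing instead of a subscription login.
# Replace the placeholder with a real key from the Anthropic console.
export ANTHROPIC_API_KEY="sk-ant-xxxxxxxx"
claude    # Claude Code will bill usage against the API key
```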
I would not personally, as I can't spend thousands per month on an agentic tool. I hope they figure out limits that work. $100 / $200 is still a great deal. And the predictability means my company will pay for it.
Unlimited plans encourage wasting resources[0]. By actually paying for what you use, you can be a bit more economical and still get a lot of mileage out of it.
$100/$200 is still a great deal (as you said), but it does make sense for actually-$2000 users to get charged differently.
0: In my hometown, (some) people have unlimited central heating (in winter) for a fixed fee. On warmer days, people are known to open windows instead of turning off the heating. It's free, who cares...
Anthropic never sold an unlimited plan
It’s amazing that so many people think there was an unlimited plan. There was not an unlimited plan.
> These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
Correct! And they did. And now Anthropic is changing those limits in a month.
> LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
This exists. You use the API. It has always been an option. Again, I’m confused about why there’s so much anger about something that already exists.
The subscriptions are nice for people who want a consistent fee and they get the advantage of a better deal for occasional heavy usage.
I'm told the $200/month plan was practically unlimited; I heard you could leave ~10 instances of Claude Code running 24/7. I will never pay for any of these subscriptions, however, so I haven't verified that.
>And now Anthropic is changing those limits in a month.
Which indicates the seller was being scammed. Now they're changing the limits so it swings back to being a scam for the user.
>I’m confused about why there’s so much anger about something that already exists
Yes but much LLM tooling requires a subscription. I'm not talking only about Anthropic/Claude Code. I can't use chatgpt.com using my own API key. Even though behind the scenes, if I had a subscription, it would be calling out to the exact same API.
Let people cook and give them some time to find out how to do this. Voice discontent but don't be an asshole.
Otherwise it’s just a change in the offering. You can unsubscribe freely.
This sort of entitlement puts me off. Prices for things change all of the time.
you can see here in this Reddit thread from April, when Claude Max was launched, that it was explicitly explained as being limited: https://www.reddit.com/r/ClaudeAI/comments/1jvbpek/breaking_...
Max is described as "5-20x more than Pro", clearly indicating both are limited.
here's their launch blog post: https://www.anthropic.com/news/max-plan
> The new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.
obviously everyone wants everything for free, or cheap, and no one wants prices to change in a way that might not benefit them, but the endless whinging from people about how unfair it is that anthropic is limiting access to their products sold as coming with limited access is really extremely tedious even by HN standards.
and as pointed out dozens of times in these threads, if your actual worry is running out of usage in a week or month, Anthropic has you covered - you can just pay per token by giving Claude Code an API key. doing that 24/7 will cost ~100x what Max does though, I wonder if that's a useful bit of info about the situation or not?
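That ~100x figure is easy to sanity-check with back-of-envelope arithmetic. A sketch where the per-token prices and throughput are assumptions for illustration, not quoted rates:

```python
# Rough cost of one agent loop running 24/7 on pay-per-token API billing.
# Prices and throughput below are assumptions for illustration only.
input_price = 15 / 1_000_000    # $/input token (assumed Opus-class rate)
output_price = 75 / 1_000_000   # $/output token
tokens_in_per_hour = 2_000_000  # assumed: agents re-send large contexts
tokens_out_per_hour = 200_000

hourly = tokens_in_per_hour * input_price + tokens_out_per_hour * output_price
monthly = hourly * 24 * 30
print(f"~${monthly:,.0f}/month vs a $200 flat plan ({monthly / 200:.0f}x)")
```

Under these assumptions the 24/7 agent lands in the low tens of thousands of dollars per month, which is the same order of magnitude as the ~100x claim.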
Claude Max, to my knowledge, was never marketed as "unlimited". Claude Max gives you WAY more tokens than $100/$200 would buy. When you get rate limited, you have the option to just use the API. Overall, you will have gotten more value than just using the API alone.
And you always had, and continue to have, the option of just using the API directly. Go nuts.
The author sounds like a petulant child. It's embarrassing, honestly.
Differentiation through honesty: In a market full of fluff, directness stands out. Customers might respect a brand more for telling the truth plainly, even if the truth isn’t ideal.
The risk: It could scare off some customers who don’t read the fine print anyway. But that may not be a loss—it might actually filter in the right kind of customer, the one who wants to know what they’re really getting.
> The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption.
> When developers get "rate limit exceeded" while debugging at 2 AM, they're not thinking about your infrastructure costs—they're shopping for alternatives.
Notice a pattern here?
In subscription plans, users who aren't using 100% of their subscription subsidize other users, which is opaque and not really fair.
But hey, this is just a sales pitch from one company I wouldn't trust by taking a dump on another company I wouldn't trust.
Some say they just have to define a huge limit and be done with it.
Limits are sometimes hard to define:
- they must be huge enough that a (human) user effectively perceives them as unlimited, otherwise he will compare them to competitors'
- but not so huge that the 0.1% of heaviest users will try to reach them
A fairer wording would categorize the type of use:
- human: humans have physical limits (e.g. words typed per minute).
- bot: from a single Arduino to heavyweight hardcore clusters, virtually no limits.
I hold them no ill will for rapidly changing pricing models, raising pricing, doing whatever they need to do in what must be a crazy time of finding insane PMF in such a short time
BUT the communication is basically inexcusable IMO. I don't know what I'm paying for, I don't know how much I get, their pricing and product pages have completely different information, they completely hide the fact that Opus use is restricted to the Max plan, they don't tell you how much Opus use you get, and their help pages and pricing pages look like they were written by an intern and pushed directly to prod. I find out about changes on Twitter/HN before I hear about them from Anthropic.
I love the Claude Code product, but Anthropic the company is definitely nudging me to go back to OpenAI.
This is also why competition is great though - if one company had a monopoly the pricing and UX would be 20x worse.
> You can tell how it's intentional with both OpenAI and Anthropic by how they're intentionally made opaque. I can't see a nice little bar with how much I've used versus have left on the given rate limits
Well, that's a scam. A legal one.
Anthropic never sold Max plans as unlimited. There are two tiers, explicitly labeled "5x" and "20x", both referring to the increased usage over what you get with Pro. Did all the people complaining that Anthropic reneged on their "promise" of unlimited usage not read anything about what they were signing up to pay $100 or $200/month for? Or are they not even customers?
why do all users need to suffer???
I don’t know why you think everyone is going to suffer.
they just sell you 20x more usage limit???? nothing says unlimited
Just below me as I type there's a comment saying they're refusing to cancel a subscription (may not be below me any more when I finish typing).
Somewhere lower there's a comment saying they do not show the full price when you subscribe, but add taxes on top of it and leave you to notice the surprise on your credit card statement.
Is there an ethical "AI" service anywhere?
On a more serious note, I'm sure most people can't fathom or even think about the resources they consume when using AI tools. These things don't merely use energy, they consume it like a black hole sucks in light.
In some cases, your queries can consume your home's daily energy needs in an hour or so.
For Claude Code and similar services, we’re still in the very early stages of the market. We’re using AI almost for free right now. It’s clear this isn’t sustainable. The problem is that they couldn’t even sustain it at this earliest stage.
Was trying to "analogize" to "unlimited".-
Thoughts? If you want me to, I can elaborate on what I really mean, but I hope it was understandable enough.
https://www.youtube.com/watch?v=NOX2C1UMxL0