
GitHub Copilot is moving to usage-based billing

https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/
174•frizlab•1h ago

Comments

_pdp_•1h ago
There is a noticeable trend across all agentic coding platforms: this situation is no longer sustainable.

With this kind of pricing (Sonnet 4.6 has a 9x multiplier, previously 1x), it raises the question of why use Copilot to begin with.

You could easily just buy the tokens directly and have a lot more choice as well.

bsdz•1h ago
Doesn't GitHub get volume discounting they can pass on to their Copilot customers?
infecto•1h ago
Looking at their pricing, it doesn't look like that's the case.
minimaxir•1h ago
Economies of scale don't work when scale still isn't enough and capacity is still limited.

GitHub has the full power of Azure with their hosted models but it's not being passed to consumers.

_pdp_•1h ago
It seems more expensive to me, but I might be reading it wrong.
sottol•1h ago
One reason I used it was that I wasn't locked into a single provider and switching them was as easy as changing a drop-down. Small feature? Sonnet or GPT5.4/mini? Large changes? Opus. And why not see how good Raptor Mini does this one refactor?

It also helped build an intuition of what each model could do and which parts it was weaker at, because you could try them almost side by side, especially if one model's output wasn't great.

That said, these were all side projects, so nothing truly consequential. OTOH, you might leave some extra performance on the table, but I found the models worked quite well with the Copilot harness.

Waterluvian•51m ago
Yeah, this is a very useful abstraction layer. The entire concept of separating the model creator from the model runner is good for competition and is customer friendly. Which means they likely hate the concept and want to kill it.

Gosh, imagine getting to do that with your TV/Streaming subscription. Getting to pay one fee to access some set number of hours per month from any of the providers.

Incipient•39m ago
The problem is I can't afford the tokens! Even on my $10/mo plan, running either 100 opus, or 300 sonnet agent runs would cost hundreds of dollars - well above my budget!
my002•1h ago
The era of subsidised inference is truly ending. The new model multipliers (https://docs.github.com/en/copilot/reference/copilot-billing...) seem like a huge leap, though. From 1x to 6x for new-ish GPT and Sonnet models. 27x for Opus...

Seems like folks would be better off with OpenRouter instead.
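The multiplier mechanics described here can be sketched in a few lines. The multipliers (27x Opus, 9x Sonnet, 6x GPT) and the $10 of included Pro credits are the figures quoted in this thread; the "credits consumed = raw cost x multiplier" semantics are an assumption about how the new billing works, not confirmed detail.

```python
# Sketch of how a model multiplier inflates effective cost. Multipliers are
# the ones quoted in the thread; the linear credits-consumed semantics are
# an assumption.
MULTIPLIERS = {"opus": 27.0, "sonnet": 9.0, "gpt": 6.0}

def credits_consumed(raw_cost_usd: float, model: str) -> float:
    """Dollars of AI Credits burned for a given raw model spend."""
    return raw_cost_usd * MULTIPLIERS[model]

# With Pro's $10 of included monthly credits, the raw usable spend per model:
for model, mult in sorted(MULTIPLIERS.items()):
    print(f"{model}: ${10.0 / mult:.2f} of raw usage per month at {mult}x")
```

Under these assumptions, $10 of credits covers only about 37 cents of raw Opus-class usage per month.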

ItsClo688•1h ago
27x for Opus is genuinely shocking. At that point you're not paying for convenience anymore, you're just paying a GitHub tax. OpenRouter or the direct API makes way more sense unless you're really glued to the IDE integration.
thrdbndndn•1h ago
I keep seeing people mention OpenRouter.

Does it effectively bypass regional restrictions for you, so you can use something like the Claude API from unsupported regions such as Hong Kong, or does it still enforce the official providers' geo-restrictions?

rvnx•1h ago
OpenRouter is great for budget control, but since they are indirect APIs, your experience with cached tokens may vary, eventually costing much more than going direct, depending on the provider.

You can pay with crypto though, which seems to be convenient for people under sanctions or with limited access, or if you are in low-tax jurisdiction (e.g. HK)

jauntywundrkind•43m ago
Caching is advertised per model+provider.

That said I think few people using openrouter are actually being selective about providers.

It took half a day to get my opencode setup working; it was not friendly. A lot of manual cross-referencing of models and providers. I was actually mainly optimizing for relatively fast providers. It's all super fragile and I'm sure half out of date; I have no idea if these picks are still fast, and there's no promise they're still the same price (pretty terrifying, honestly).

I'm mostly on coding plans so it doesn't super affect me. But man is it a bother to maintain.

minimaxir•1h ago
What's annoying is that it's obvious. In the case of GPT 5.5, if Copilot is going to charge 7.5x what GPT 5.4 costs while OpenAI themselves via the API/Codex only charges 2x of what GPT 5.4 costs, that will immediately raise an eyebrow.
boothby•52m ago
To anybody who's been watching the tech sector with a critical eye for pretty much any period from the late 90s onward, this is just the enshittification process. For most of OpenAI's existence it's been obvious, to me, that investors were burning insane levels of capital to build the market, and now that folks are locked in, you're seeing higher fees, ads, etc. Yet again, the user is the product; the investors want to siphon your data, your attention, and, once you're hooked, your money. And for companies like Microsoft and Apple, those hooks can dig deep.
Incipient•45m ago
I'd call it a straight up "bait and switch".
Gagarin1917•6m ago
“Enshittification” is just what happens when unsustainable subsidies end?

Another reason to hate that word.

From a different perspective, you were granted an incredible gift from the companies who let you use their product on their dime. Hopefully you made the most of it when you had the opportunity.

specproc•1h ago
Yeah, totally. The recent pricing changes have made my Copilot subscription go from a great deal to awful value overnight.

I've been wanting to get off MS more generally, and this is good motivation. I'll be playing around with OR this week.

cedws•32m ago
Just be aware OpenRouter charges a 5.5% fee, I didn’t know until recently. I like the product, and I think the fee is fair, but if you want the absolute best pricing then go direct.
nacs•1h ago
Even Sonnet 4.6 is 9x multiplier (previously 1x)!

The only model I even used on Copilot was Sonnet, and now it's got a ridiculous multiplier.

At this point they might as well just charge per Million tokens like every other provider instead of having a subscription.

altmanaltman•16m ago
> At this point they might as well just charge per Million tokens like every other provider instead of having a subscription.

Pretty sure that's what they will eventually do

rvnx•53m ago
One theory of the play SpaceX might make if everyone migrates to usage-based billing:

Provide cheap and unlimited access to Grok for programmers (hence the Cursor partnership/purchase for distribution).

-> This would bring in massive revenue right before the IPO announcement, as if the company were growing rapidly.

-> At a loss, but don't worry, we need those funds to build the biggest datacenter in the universe.

This announcement would create enough momentum to increase the valuation, and because of the merger of his companies, it would save his X/Twitter investors from a tragedy.

-> It would also be a great service to Cursor investors and the like, who are stuck with their VSCode fork.

minimaxir•52m ago
It takes longer to build a datacenter with that much capacity than it does for the market to respond.
gigiogigione•7m ago
I don’t get the SpaceX reference. I thought they made rockets?
giwook•50m ago
Lots of us have noticed that usage limits for Claude have been nerfed in recent weeks/months.

If anything, these new multipliers are more transparent than anything OpenAI or Anthropic have communicated regarding actual costs and give us a more realistic understanding of what it's costing these providers.

The fact that we were able to get such a substantial amount of usage for $20/$100/$200 a month was never meant to last and to think otherwise was perhaps a bit naive.

This feels like a strategy from the ZIRP era of tech growth where companies burned investor capital and gave away their products and services for free (or subsidized them heavily) in order to prioritize user acquisition initially. Then once they'd gained enough traction and stickiness they'd then implement a monetization strategy to capitalize on said user base.

dualvariable•16m ago
However, inference costs for models that are entirely good enough are likely to keep declining. We're probably hitting diminishing returns on model size and training. The new generations aren't quantum leaps anymore, and newer generations of open source models like DeepSeek are likely to start being good enough.

There's going to be a limit to how much they can raise prices, because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI--and that's a business model right there. Right now I'm sure there's a lot of people who will protest that they couldn't do their jobs with lesser models, but as time goes on that will get less and less. Already right now the consumers who are using AI for writing presentations, cooking recipe generation and ELI5 answers for common things, aren't going to be missing much from a lesser model. That'll actually only start to get cheaper over time.

Also for business needs, as AI inference costs escalate there comes a point where businesses rediscover human intelligence again, and start hiring/training people to do more work to use lesser models--if that is more productive in the end than shelling out large amounts of cash for inference on the latest models. [Although given how much companies waste on AWS, there's a lot of tolerance for overspending in corporations...]

Fire-Dragon-DoL•8m ago
I hope it's true, but right now hardware prices are insane
whateveracct•47m ago
"eras" tend to not be so short lol
Mattwmaster58•34m ago
FYI, these are the multipliers for annual plan. I would hazard a guess most people are not on an annual plan
skeeter2020•27m ago
"This change aligns Copilot pricing with actual usage and is an important step toward a sustainable, reliable Copilot business and experience for all users."

I see statements like this as strong indicators that the sales people are wrapping up their work and the accountants are taking over. The land rush is switching to an operational efficiency play.

siva7•22m ago
That's so unfair to us hard-working developers. A month ago I could buy a turn with Sonnet for $0.40. Now I have to pay at least $0.90 for that turn. Weeks ago I could buy an Opus turn for $0.12, after they had already raised prices, and now they want $0.27 from me for the same product! They are stealing from us!
Ilaurens•1h ago
"Your plan pricing is unchanged: Copilot Pro remains $10/month and Pro+ remains $39/month, and each includes $10 and $39 in monthly AI Credits, respectively."

If there's no discount on credits (in terms of tokens per dollar) over other providers, I'm going to switch to a PAYG provider. If there's a month with little to no coding, I can pocket the $10. What incentive do they give me to stay on this plan?

freedomben•1h ago
This was my first thought too. "Oh cool, I should be seeing lower prices" as I don't use Co-pilot that often anymore. But no, that's not the case. It rather served to remind me that I should probably just cancel.
stetrain•57m ago
They could add rollover balances and be back to cell phone plans in the early 2000s.
cush•54m ago
Are you thinking something like rollover plans?
Someone1234•53m ago
Yep.

Or if you're a business with multiple seats, these plans may be less efficient than raw API usage billing, since if anyone at your organization fails to use their full $19/39 allotment each month, that's wasted money, whereas with API credits it is 100% utilized.

I don't think they've thought through the implications of this. Everyone should cancel and go to usage-based billing with caps.
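The per-seat vs pooled tradeoff can be made concrete. The $19/seat allotment is the Business figure mentioned in this thread; the per-user demand numbers below are hypothetical, made up purely for illustration.

```python
# Per-seat caps vs a pooled budget, with hypothetical per-user demand.
# The $19/seat allotment is the Business figure quoted in the thread.
seat = 19.0
demand = [30.0, 5.0, 0.0, 12.0]  # hypothetical monthly demand per user, USD

# Per-seat allotments: each user is individually capped, and one user's
# surplus cannot cover another user's overage.
served_per_seat = sum(min(d, seat) for d in demand)

# Pooled: one shared budget for the whole organization.
served_pooled = min(sum(demand), seat * len(demand))

print(served_per_seat, served_pooled)  # 36.0 47.0
```

With per-seat caps, the heavy user is cut off at $19 while other seats go unused; pooling lets the same total budget serve all of the demand.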

DominikPeters•49m ago
They mention in the announcement that it will be possible to pool usage across an organization.
to11mtm•47m ago
They do address this in the doc: orgs can now 'pool' usage billing across all users (although it was vague as to whether that's an option or just the new standard; probably an option, due to business contracts).

I'm guessing they did that (and the 'temporary bonus credits') to make the pill easier to swallow for that side of customers.

Someone1234•39m ago
You're right, I missed that.

It still makes one wonder: why have seats at all, though? If everyone is just in one big API credit pool, what do the seats/users accomplish?

Mattwmaster58•32m ago
For orgs, each user was allotted their own quota. For messages beyond that quota, a pooled budget is available.
Ronsenshi•1h ago
Just got an email with this announcement.

I have Copilot Pro that I use occasionally, but not enough to tell how the switch to per use would affect my usage.

Based on the description, Pro plan users will get $10 in monthly AI Credits, but that seems rather low compared to what you could use on the same plan until now.

nine_k•1h ago
> rather low compared to what you could use same plan until now.

That's exactly where the subsidy is being removed.

metahost•1h ago
Here goes my Copilot Pro subscription then, reluctantly heading over to Codex CLI since the CC base plan is downright unusable.
twistedcheeslet•1h ago
End of an era for predictable costs as a small business. We will refer to these times as ‘the good old days’.
redsaber•1h ago
Some of GitHub's open source maintainers have lost their free GitHub Copilot Pro. I guess this is really the next step for them to save costs on their infrastructure.
everfrustrated•1h ago
Current multipliers vs. those starting in June:

  Opus 4.6  3x -> 27x
  Opus 4.7  3x -> 27x
  GPT  5.4  1x ->  6x
alecsm•1h ago
GPT 5.4 mini is even worse: from 0.33x to 6x, which is ~18 times more expensive now.
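The ~18x figure checks out arithmetically, assuming the effective price scales linearly with the multiplier:

```python
# Quick check of the "~18 times more expensive" figure (assumes effective
# price scales linearly with the multiplier).
old_mult, new_mult = 0.33, 6.0
increase = new_mult / old_mult
print(f"{increase:.1f}x")  # ~18.2x
```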
minimaxir•59m ago
GPT-5.4 and GPT-5.4 mini are now the same price, which, why?
t-sauer•35m ago
Those multipliers will only apply if you are currently on an annual subscription (and only until your renewal comes up or you cancel). So I assume they simply want to make it as unattractive as possible to get most people to cancel it and move to the token based system.
motoboi•58m ago
Not apples to apples.

Before:

- Opus 4.6 each premium request is 3 premium requests

After:

- Opus 4.6: each dollar spent is 27 dollars in Copilot AI Credits.

Given that you'll receive 19 dollars of AI Credits on the Business plan, that means you can probably say one "hi" to Opus per month.
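A rough sanity check on the "one hi per month" quip: the $19 of Business credits and the 27x multiplier come from this thread, but the $75 per million output tokens used below is an assumed Opus-class API price, not a figure from the announcement.

```python
# Rough check of the "one 'hi' to Opus per month" quip. $19 of credits and
# the 27x multiplier are from the thread; the $75/1M-token output price is
# an assumption, not from the announcement.
credits, multiplier = 19.0, 27.0
raw_budget = credits / multiplier                # ~$0.70 of raw Opus spend
assumed_price_per_mtok = 75.0                    # assumed USD per 1M output tokens
tokens = raw_budget / assumed_price_per_mtok * 1e6
print(f"~{tokens:,.0f} tokens/month")
```

Under those assumptions it's closer to a few thousand tokens than literally one "hi", but still only a very short session.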

t-sauer•37m ago
It is an apples-to-apples comparison, since those new multipliers only apply if you are on an annual plan, in which case the premium request system stays in place until you either cancel and get a refund or your renewal comes up. https://docs.github.com/en/copilot/concepts/billing/usage-ba...

If you are not on an annual plan, multipliers will be gone completely. You can see the rates that apply instead here: https://docs.github.com/en/copilot/reference/copilot-billing...

kristjansson•29m ago
Based on the pricing, and comparing to competitors, e.g. Bedrock [1], it looks like cache writes will only be on a 5-minute TTL.

[1]: https://aws.amazon.com/bedrock/pricing/

bachmeier•55m ago
GPT 4.1 had a multiplier of 0.
kristjansson•33m ago
I think that only applies to held-over users on the annual plan:

> Users on annual Pro or Pro+ plans will remain on their existing plan with premium request-based pricing until their plan expires, however, model multipliers will increase on June 1 (see table).

netule•1h ago
I pay for Copilot annually, and mostly for its code auto completion features. I use CC if I want to do anything agentic. Not sure if I want to pay more for occasionally-good-intellisense at this point.
KronisLV•44m ago
Same! I wonder what other alternatives there might be for autocomplete.
netule•25m ago
If you find out, please let me know!
alecsm•1h ago
So I guess from now on GH Copilot is only worth it if you want a quality autocomplete in VSCode.
Waterluvian•25m ago
That was the first thing I turned off in VSCode. Autocomplete for my TypeScript projects was great. And the "AI" suggestions/completions were really getting in the way of me still being the "driver."
thinkingtoilet•1h ago
People need to wake up and stop being surprised by these billing increases. I see it on every update of every model. This was all subsidized by VC and company money. Now they need a return and the prices will keep going up. Be glad that you took advantage of that up until now, but can we stop the pearl clutching when we all know the amount of money being dumped into AI and the lackluster returns?
minimaxir•1h ago
It's less surprise and more confusion, given the game theory: their competitors are not doing the same thing, and the multiplier changes alone will likely churn current users.
sefrost•1h ago
I was curious why a company would still use the VS Code + Copilot sidebar method for coding, rather than something like Claude Code. Turns out there’s a GitHub Copilot CLI!

I thought I was pretty familiar with available options, but no one in my circles ever mentions this product. It doesn’t seem to have much mindshare.

Has anyone used it? What’s your experience?

https://github.com/features/copilot/cli

saratogacx•1h ago
I've used it quite a bit. There are a lot of AI terminal coding products and this is another one. It works well, handles sub-agents without issue, and does a reasonable job operating in the Copilot ecosystem. It handles mid-task questions and such as well.
sefrost•59m ago
I’ve tried OpenCode, Claude Code and Codex CLI. But was just shocked that Microsoft has a version I hadn’t even heard of.

Personally I got CLI fatigue and am happy with Conductor for now, but things are moving fast in this space.

bsdz•1h ago
I've used it. It's on par with OpenCode imho.
KronisLV•45m ago
> I was curious why a company would still use the VS Code + Copilot sidebar method for coding, rather than something like Claude Code.

I use Claude Code, but I kept my Copilot subscription around mostly for really cheap usage of other models when I need to try a different one (which appears to be ending, in a sense), and also for the autocomplete in Visual Studio Code, which was really great across a bunch of files: I could make changes in one file and then just tab through some others.

I wonder what other good autocomplete is out there.

on_the_train•36m ago
Because Copilot is the only thing allowed at our corp
boc•35m ago
I'm just confused why people aren't using ghostty/kitty/Terminal.app and Claude Code. Compared to the other approaches I've tried, it's by far the most effective way to get performance out of Opus 4.6/4.7.
Austizzle•25m ago
The vs code integration is pretty slick. I can copy and paste function names into the prompt and it automatically turns them into these `#sym:` reference objects that I presume populate the context window with metadata about the function and where it lives. It knows what file I'm currently looking at as I jump around in the code, and that automatically gets loaded into the context. I can also drag and drop folders or specific files for context into the sidebar.

It's a lot of stuff that makes me have to type less into the prompt, since it's already getting so much info from my editor

ramesh31•23m ago
It's essentially a carbon copy of Claude Code, but with a 7x multiplier for Opus tokens. Totally unusable compared to a Claude Max plan.
danbrooks•17m ago
I tried the VS Code + Copilot sidebar approach a few months ago. It was definitely rough around the edges compared to Cursor/Claude. In our corporate environment, we weren't even able to use frontier models.
brunoborges•14m ago
The other cool thing is Copilot SDK, so you can build agentic capabilities into apps, or build tools, that leverage the agent harness of the Copilot CLI:

https://github.com/github/copilot-sdk/

csomar•13m ago
Search has become so bad that I also struggled to find Claude Code alternatives, so I made my own tight list (not editors, not plugins, not agents; strictly tools similar to the Claude Code CLI): https://github.com/omarabid/cli-llm-coding

The list is not long but there are quite a few options. Even Grok has its own CLI!

The reality is, even though a CLI prompt looks very simple, it's a very complex piece of software. I personally use Claude Code (with GLM) and anything else I have tried was significantly inferior (with the exception of opencode).

silverwind•1h ago
TLDR: It's a 6-9x price increase
4ndrewl•1h ago
"Plan prices aren’t changing.”

Isn't this like saying "The Porsche you rented at $200/mo is now a Honda. But the price hasn't changed!"

deeviant•57m ago
It's more like saying, "and you may now only use the Porsche for 5 minutes out of every day."
Waterluvian•57m ago
"Your monthly fee isn't changing but it now only covers about 3 days of driving."
canada_dry•53m ago
This may be a more accurate analogy... "The Porsche you rented at $200/mo now only allows you a maximum of 100km of travel. You will be automatically charged extra when you go over that."
adgjlsfhk1•9m ago
more like 100m
herrj•1h ago
Cursor, Windsurf, and CC are all already on usage-based models, so I guess what really matters is whether Copilot's GitHub integration depth justifies the price per token vs. the alternatives.
grey-area•1h ago
How is this legal when people paid for a yearly plan in advance?
mgrund•59m ago
My thought exactly! First the usage limits and model limitations, and now a fundamental change to the billing. I hope some consumer watchdogs are looking into this!
boromisp•56m ago
I doubt you can force them to provide the service under the original terms, but you might be able to ask for a (partial) refund. If not today, then after a week of the verbal abuse they will receive for this online.
yladiz•51m ago
It depends where you’re located. In the EU they have to honor the contract you entered, but presumably there is a clause that they can prematurely terminate the contract without cause and give you all of your money back (from the start of the contract).
bityard•55m ago
In order from most to least charitable, any of:

1. Github could choose to grandfather in those plans and make no changes until those plans expire.

2. Github could offer, or the user could request, a pro-rated refund along with cancellation of the account.

3. Tough luck, those users agreed that Github could unilaterally change the ToS at any time.

javawizard•16m ago
> 1. Github could choose to grandfather in those plans and make no changes until those plans expire.

They explicitly stated that they won't be doing that: the multipliers go into effect in June for everyone, annual plan or not.

dist-epoch•49m ago
For the yearly plan they only change the model multiplier. And it's in the subscription contract that they can change that multiplier at any time.
drawfloat•23m ago
I just checked and you can cancel with a refund.
ReptileMan•1h ago
I really don't understand why OpenAI, Anthropic and Microsoft are in competition to see which one of the three will elevate deepseek the most.
dist-epoch•51m ago
DeepSeek will do the same thing.

Z/Mimo already raised their prices multiple times since the promotional prices at the start of the year.

simonw•1h ago
Windsurf made a similar change in March: https://docs.windsurf.com/windsurf/accounts/quota

> In March 2026, Windsurf replaced the credit-based system with a quota-based usage system. Instead of buying and spending credits, your plan now includes a daily and weekly usage allowance that refreshes automatically.

With hindsight, per-request pricing makes no sense at all if an agent can burn a widely varying amount of tokens satisfying that request. These pricing plans were designed before coding agents changed the dynamics of token usage.

Incipient•41m ago
I wouldn't call it hindsight - I don't think anyone, at any stage, thought running a 10 minute+ sonnet session for 1 premium credit was ever profitable. We all knew it was a loss leader to get people using it.
Lihh27•20m ago
Per-request was broken, yeah. But $10 of monthly credits is basically just a prepaid wallet with a reset timer.
semiquaver•59m ago
Whose idea was this “premium request” model anyway? If you're going to invent a new metric to bill by, why not align it with what, even at the time, was a clear underlying cost structure? GitHub actively chose to ignore it in favor of a more confusing system.
Waterluvian•55m ago
I'm not usually a Conspiracy Guy, and the answer is probably `incompetence * tech_debt`. But I think that having sufficient layers of abstraction to any billing model is a useful way to hide the real cost of things. It's why it's done everywhere.
DominikPeters•40m ago
This approach started with the “Ask a question about your code” feature, which is more comparable to a single chat message with relatively predictable token usage. Now it's an agent that might work for 30 minutes, read the whole codebase, and write 1,000 lines.
kingstnap•36m ago
It made more sense in the ye old days where a request was basically just a chat message in a sidebar and it could also edit code. Then saying someone can use 300 chat messages a month kinda makes sense.

Turns out that when a request can spawn tens of subagents and use millions of tokens over many turns of tool calls, GitHub Copilot suddenly has a massive financial problem on its hands.
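The divergence between a "request" then and now is easy to illustrate. The token counts and the blended $3 per million tokens below are made-up illustrative numbers, not figures from the thread:

```python
# Why flat per-request billing breaks once a "request" can be a whole
# agent run. Token counts and the blended $3/1M-token price are made up
# for illustration only.
price_per_mtok = 3.0  # assumed blended USD per 1M tokens

requests = {
    "sidebar chat message": 2_000,
    "single-file edit": 15_000,
    "agent run with subagents": 4_000_000,
}
for name, tokens in requests.items():
    print(f"{name}: ${tokens / 1e6 * price_per_mtok:.2f}")
```

Three orders of magnitude of underlying cost hiding behind the same "one request" price is exactly the mismatch being corrected.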

dude250711•59m ago
Which one is it:

1. Current models in fact do not solve coding.

2. You can simply wait for a ~year for open-source to catch up and run it locally.

gpm•43m ago
Re 1: Current models don't solve coding. They are a useful tool for it, though.

Re 2: Open weight models seem to be less than a year behind proprietary ones, so sure, if you're willing to spend tens or hundreds of thousands of dollars on a super computer that you probably don't fully utilize instead of renting time on someone else's super computer for a lot less.

bachmeier•58m ago
I'm happy I invested in local solutions and cutting context to the bone for API providers. Claims about AI being able to fully replace programmers never took into account the long-run equilibrium price of inference.
dist-epoch•44m ago
Already there are companies paying more for coding tokens than for programmer salaries.
xienze•14m ago
Doesn't necessarily mean that's a good idea.
stabbles•57m ago
I was surprised to find that this sentence

> Plan prices aren’t changing

did not continue with an em-dash followed by something profound that is changing.

Plan prices aren't changing -- the value you get out of it is.

miroljub•55m ago
This subsidized inference is just a marketing ploy to increase prices and profit.

If common people can have a DIY setup with an open source model cheaper than those behemoths with a scale advantage, it's clear that we have been played.

Time to either self host a Chinese open source model or to just pay the cheap Chinese providers.

speedgoose•53m ago
So about a month left before cancelling. Got it.
postalcoder•53m ago
GitHub had, by far, the most easily game-able agent usage policy. People would force the agent to run a script at the end of each turn that consisted entirely of `input("prompt: ")`, so that you could essentially talk endlessly to an agent for the price of a single turn. I see this less as being about the future of this industry and more about fighting the costs incurred by bad actors.
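A hypothetical reconstruction of that loophole: the agent is told to end every turn by running a script that blocks on `input()`, so the billed turn never finishes and the user keeps chatting inside a single request. The `keep_turn_open` helper and its parameters are made up for illustration; the thread only mentions the `input("prompt: ")` trick.

```python
# Hypothetical sketch of the loophole described above: block on input()
# at the end of the agent's turn so the billed "turn" never ends.
def keep_turn_open(read=input, act=print):
    """Loop until the user types 'done'; each message here would otherwise
    have been a separately billed request."""
    while True:
        msg = read("prompt: ")
        if msg.strip().lower() == "done":
            break
        act(msg)  # hand the message back to the agent
```

In the real exploit, `read` is the terminal and `act` is the agent acting on the new instruction; the billing system sees one long-running turn.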
to11mtm•52m ago
... Once again the Business accounts get all sorts of goodwill [0] and users get the shaft.

[0] - Last weeks changes limited my personal Copilot Pro account but not my Work one

ValentineC•17m ago
What "goodwill"? It's just more "AI credits" for what will be a shit product in June.
gyoridavid•52m ago
After seeing the ridiculous multiplier increase, I've added a calendar event to cancel my subscription mid-May.

(I'm a copilot subscriber since 2022)

nickjj•49m ago
I don't use Copilot or any paid AI but all of this usage-based billing reminds me of cellphones back when you paid per individual text message.

Paying per use for AI is 1000x crazier because you're not even getting a guarantee of the thing you pay for in the end. You have to keep feeding it prompts and hope it gives you the solution you want. You may end up with no expected result, yet you are paying for it. At least with texting, you got what you paid for.

I wonder how long it'll be before all AI costs are flat unlimited monthly fees or even free across the board, without compromise.

deweller•49m ago
Has anyone found the answer to this yet?

> What is the benefit of using the Copilot Pro+ at 39$/month instead of using the Copilot Pro at 10$/month and paying for extra usage?

to11mtm•40m ago
If I had to guess...

On my personal account, Copilot Pro+ still only gave me back Opus 4.7, whereas my work's Pro account still lets me use Opus 4.6.

So, my gut says, it's entirely possible that Pro+ will continue to have more segregation on model availability...

FTA

> Last week, we also rolled out temporary changes to Copilot Individual plans, including Free, Pro, Pro+, and Student, and paused self-serve Copilot Business plan purchases. These were reliability and performance measures as we prepare for the broader transition to usage-based billing. We will loosen usage limits once usage-based billing is in effect.

There's enough weasel wording here that I would expect only certain models get re-enabled on Pro.

E.g. lots of people seem to get good-enough results from Opus 4.6; personally I prefer it over 4.7 in GH Copilot. Locking that down to Pro+ would be, given this salvo of enshittification, a 'logical' move on their part.

bewuethr•33m ago
Some models, for example Opus 4.7 and GPT 5.5, are only available on Pro+; Pro+ has audit logs and GitHub Spark; that's about it, as far as I can tell from https://docs.github.com/en/enterprise-cloud@latest/copilot/g...
immanuwell•48m ago
tldr: people were running multi-hour agentic coding sessions for the same flat fee as a one-liner autocomplete, github was eating the bill, and that party's over on june 1st
CraigJPerry•45m ago
The cheapest Copilot plan felt totally unsustainable to me. For around £8/month I was getting 100 Opus 4.6 prompts (albeit with a reduced context window, around 128k IIRC, vs 200k to 1M for first-party hosted Opus). GPT 5.4 was hosted with a 400k context, IIRC.

On top of that, you've got 2,000 minutes of container runtime, so running cloud agents was included. As was Anthropic agent SDK mode via Copilot, which is very comparable to Claude Code. Not identical: the Anthropic “modular prompt” is much leaner in the SDK version.

I can't say I'm mad; I got more value than I paid for. That said, going forward I'll probably go back to OpenRouter PAYG rather than a subscription.

I got a free 3 months of the Gemini £19 plan and I've been playing with it quite a bit. 3.1 Pro is a good model, I just find it slow. Flash, I think, I underappreciated until now.

born_a_skeptic•30m ago
I wonder if GitHub (Microsoft) is implicitly betting that enterprise demand is sticky enough to absorb these rates, especially given that Opus 4.6 “fast” was being listed at a 27x multiplier. Maybe they saw enough usage at that price point to conclude the demand is real. Or maybe the strategy is to keep the enterprise customers who can justify it while shedding heavier individual and power-user usage.

The interesting question is how long it takes enterprises to notice the capability/pricing tradeoff, and whether they respond by limiting access to the strongest models internally.

The part that worries me is that this market is still very early. Most developers and organizations are still learning how to use these tools effectively. Raising the experimentation cost this much may slow down the discovery process that makes the tools valuable in the first place.

JBlue42•3m ago
As someone that is on the enterprise side in a non-tech F500 company, what I'm seeing is some FOMO and need to be part of the hype cycle. We're about to plonk a bunch of money on more Copilot licenses. Something got in the water where all the C-levels the past two months are pushing everyone to use AI but when they bring up examples of their uses its like "I use it to rewrite my emails" or prompt 'engineering' ideas that point more to patching over poor processes, data management, and decision-making within the organization or not.

What we're seeing across the board is every software company tossing AI onto their name or sales pitch and no one understanding what that actually means. But we will spend money on it because of FOMO.

I really question whether we're reaching the end of the hype cycle at this point. I wish I were brave enough to put money on it. It feels like there was a command from up top to "do something with AI," and leadership is scrambling for resume-building projects instead of doing the hard work they should've done over the past two years at a people and process level.

Jayakumark•24m ago
Does this mean you can only prompt "Hello" every morning for a month with Opus 4.7 ?
edzitron•19m ago
vindication! https://www.wheresyoured.at/exclusive-microsoft-moving-all-g...
elashri•14m ago
So we've gone from struggling to estimate the return on investment for all the predicted LLM token usage to not even knowing for sure how many tokens we get for the same amount of money?
Abby_101•10m ago
Built credit pricing into my SaaS for AI features and the hardest part wasn't the math, it was that customers can't easily predict their own usage. They underuse and feel cheated, or overuse and churn. Subscriptions hide that volatility from the customer. Usage based pricing makes it their problem, which is honest but harder to sell.
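The volatility point can be made concrete with a toy comparison (all numbers here are hypothetical, not anyone's actual pricing): a flat subscription produces identical invoices every month, while usage-based billing passes the customer's own month-to-month swings straight through to the bill.

```python
# Toy illustration of billing-model volatility, with made-up numbers.
import statistics

# A customer's token consumption over five months, varying 16x peak-to-trough.
monthly_tokens = [2_000_000, 500_000, 8_000_000, 1_200_000, 4_000_000]

FLAT_FEE = 20.0      # USD/month under a subscription (hypothetical)
PER_MILLION = 6.0    # USD per million tokens under usage-based billing

flat_bills = [FLAT_FEE for _ in monthly_tokens]
usage_bills = [t / 1_000_000 * PER_MILLION for t in monthly_tokens]

# The subscription absorbs the variance; usage-based billing makes it
# the customer's problem, exactly as the comment above describes.
print("flat  bill stdev:", statistics.pstdev(flat_bills))
print("usage bill stdev:", round(statistics.pstdev(usage_bills), 2))
```

The spread in `usage_bills` is what the customer has to budget around, which is why usage-based pricing is "honest but harder to sell."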
wvenable•6m ago
As a GitHub Copilot user who mostly just uses chat in the VS Code editor but still burns through my Pro limit every month -- what's the best alternative in terms of price to performance? Claude Code?
Gagarin1917•4m ago
I keep hearing that Codex is the best bang for your buck now.
