frontpage.

IRS open-sources Direct File tax software amid political and industry pushback

https://www.zdnet.com/article/irs-open-sources-direct-file-tax-software-amid-political-and-industry-pushback-heres-why/
1•CrankyBear•1m ago•0 comments

Controversial genetics testing startup Nucleus Genomics raises $14M Series A

https://techcrunch.com/2025/01/30/controversial-genetics-testing-startup-nucleus-genomics-raises-14m-series-a/
1•ZeljkoS•1m ago•0 comments

See how a dollar would have grown over the past 94 years [pdf]

https://www.newyorklifeinvestments.com/assets/documents/education/investing-essentials-growthofadollar.pdf
2•mooreds•2m ago•0 comments

Ask HN: How are you using Markdown files these days?

1•fred_terzi•3m ago•0 comments

Paul Ford: Is AI a big scam, or will our jobs be on the chopping block by 2026?

https://aboard.com/the-extremely-human-last-mile/
1•gbseventeen3331•5m ago•1 comments

The Flow of the River (1953) by Loren C. Eiseley [video]

https://www.youtube.com/watch?v=QxNhHKISP-A
1•ShrugLife•5m ago•0 comments

Verifying a 5μS TAS of Super Mario Bros 3 on real hardware

https://www.youtube.com/watch?v=BGvvY5FOTL8
1•indrora•5m ago•0 comments

GOP intensifies war against EVs and efficient cars

https://arstechnica.com/cars/2025/06/gop-intensifies-war-against-evs-and-efficient-cars/
2•sleepyguy•5m ago•0 comments

We're close to translating animal languages – what happens then?

https://www.theguardian.com/world/2025/jun/01/were-close-to-translating-animal-languages-what-happens-then
2•bookofjoe•5m ago•0 comments

Trump's 'Golden Dome' plan has a major obstacle: Physics

https://www.sciencenews.org/article/golden-dome-missile-defense-physics
2•gbseventeen3331•6m ago•0 comments

1,065 Unique Dog Names from the Middle Ages (2024)

https://www.medievalists.net/2024/11/dog-names-middle-ages/
1•mooreds•6m ago•0 comments

The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text

https://arxiv.org/abs/2506.05209
1•yorwba•6m ago•0 comments

Load Testing Software Systems

https://klaviyo.tech/breaking-point-f644765a8293
1•mooreds•8m ago•0 comments

I made a waffle machine robot

https://www.youtube.com/watch?v=doiXprctuzs
1•ariwasch•9m ago•0 comments

Amino acids as catalysts in the emergence of RNA

https://www.lmu.de/en/newsroom/news-overview/news/amino-acids-as-catalysts-in-the-emergence-of-rna.html
1•geox•10m ago•0 comments

Show HN: ReqText – Create and Manage PRDs as a Local Markdown File - open source

https://github.com/fred-terzi/reqtext
1•fred_terzi•11m ago•0 comments

The Ideological Subversion of Biology

https://skepticalinquirer.org/2023/06/the-ideological-subversion-of-biology/
2•peanutcrisis•11m ago•0 comments

Find in Michigan shows the extent of ancient Native American agriculture

https://www.npr.org/2025/06/06/nx-s1-5423660/surprise-ancient-native-american-agriculture
2•marojejian•11m ago•1 comments

Silicon Valley aghast at the Musk-Trump divorce

https://www.ft.com/content/df15f13d-310f-47a5-89ed-330a6a379068
6•gitgudflea•11m ago•1 comments

Show HN: Intlayer – Lightweight Internationalization for Preact

https://intlayer.org/doc/environment/vite-and-preact
1•aypineau•12m ago•0 comments

The race to find GPS alternatives

https://www.technologyreview.com/2025/06/06/1117978/inside-the-race-to-find-gps-alternatives/
1•rbanffy•12m ago•0 comments

The latest Gemini 2.5 Pro reflects a 24-point Elo score jump on LMArena

https://blog.google/products/gemini/gemini-2-5-pro-latest-preview/
1•chandureddyvari•13m ago•0 comments

Frequent wildfires are turning forests from carbon sinks into super‑emitters

https://phys.org/news/2025-05-frequent-large-scale-wildfires-forests.html
1•PaulHoule•13m ago•0 comments

3D Simulation of Alan Turing's Pilot Ace Computer

https://pilotace.virtualcolossus.co.uk/pilotace/
1•virtualcolossus•14m ago•0 comments

Australians may soon be able to download iPhone apps from outside Apple App Store

https://www.theguardian.com/technology/2025/jun/06/australians-may-soon-be-able-to-download-iphone-apps-from-outside-apple-app-store-under-government-proposal-ntwnfb
1•OptionOfT•14m ago•0 comments

Printing the web: making webpages look good on paper

https://piccalil.li/blog/printing-the-web-making-webpages-look-good-on-paper/
1•JadedBlueEyes•14m ago•0 comments

Facet: Reflection for Rust

https://fasterthanli.me/articles/introducing-facet-reflection-for-rust
2•shiryel•15m ago•0 comments

Recovering control flow structures without CFGs

https://purplesyringa.moe/blog/recovering-control-flow-structures-without-cfgs/
1•todsacerdoti•16m ago•0 comments

The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text

https://twitter.com/AiEleuther/status/1931021637991755906
1•EnricoShippole•16m ago•1 comments

Walmart plans to expand drone deliveries to three more states

https://www.cnbc.com/2025/06/05/walmart-expands-drone-deliveries.html
2•pseudolus•18m ago•0 comments

Anthropic co-founder on cutting access to Windsurf

https://techcrunch.com/2025/06/05/anthropic-co-founder-on-cutting-access-to-windsurf-it-would-be-odd-for-us-to-sell-claude-to-openai/
105•jawns•16h ago

Comments

nickthegreek•15h ago
Now your company needs to worry that, after building workflows on the back of these neat AI companies, one of them gets bought out and causes a disruption that can reverberate through the org in unknown ways for unknown amounts of time. This and the OpenAI no-deletion court order should be a wake-up call about how these technologies get implemented and the trust we cede to them.
sebmellen•15h ago
It’s no different from the shell games that have been going on in enterprise software for the past 30 years.

You can choose independence if you’re willing to use a slightly worse open weight model.

icelancer•15h ago
This seems pretty reasonable to me. I don't really understand why Windsurf, owned by OpenAI (allegedly), should expect to have access to its main competitor's API. People can still bring their own key and use whatever models they want that way.
BoorishBears•15h ago
It doesn't make sense and the co-founder is being an intentionally crappy communicator.

He's trying to paint it as removing access to something everyone gets by default, but in reality it's removing special treatment that they were previously giving Windsurf.

The default Anthropic rate limit tiers don't support something Windsurf sized, and right now getting increased rate limits outside of their predefined tiers is an uphill battle because of how badly they're compute constrained.

consumer451•9h ago
> Earlier this week, Windsurf said that Anthropic cut its direct access to Claude 3.5 Sonnet and Claude 3.7 Sonnet, two of the more popular AI models for coding, forcing the startup to find third-party computing providers on relatively short notice. Windsurf said it was disappointed in Anthropic’s decision and that it might cause short-term instability for users trying to access Claude via Windsurf.

As a Windsurf user, maybe Sonnet 3.x models are slower right now, but they don't require BYOK like 4 does. So this is a bit of an exaggeration, isn't it? Anthropic did not cut them off, they seem to have continued honoring existing quota on previous agreements.

What did Windsurf think was going to happen with this particular exit? Also, how embarrassing is this for OpenAI that it's even a big deal?

tuyguntn•15h ago
This seems odd to me. I don't expect a bakery to refuse to sell me bread this morning because I started working at another bakery nearby.
charlie0•15h ago
No, but if YOU opened up a bakery and were packing their bread and re-selling it, they might not like that. This happened before and it's not great.

https://www.businessinsider.com/restaurant-accused-reselling...

freehorse•14h ago
This is more like cutting you off from reselling their bread, which would be reasonable.
zmgsabst•12h ago
Is it?

Grocery and department stores routinely have brands that compete with those they resell — but they’re not cut off for that. Eg, Kroger operates its own bakery and resells bread.

What makes technology unlike those?

smileeeee•4h ago
Stores regularly sell boxes of things labelled "not for individual resale". Here, Windsurf would be the shopkeeper selling individual chewing gum strips out of the big package of 10x 5 strips.
tuyguntn•7h ago
By this logic NVIDIA should cut off Claude, because Claude is reselling their GPU hours, when NVIDIA itself can do it.

IMHO, there are 3 types of products in current LLM space (excluding hardware):

   * Model makers - OpenAI, Claude, Gemini
   * Infra makers - Groq, together.ai, etc.
   * Product makers - Cursor, Windsurf and others
If Level 1 can block Level 3 this easily, that's a problem for the industry in my book. There will be no trust between the different types of companies, and when there is no trust, some companies become badly behaved monopolies, which is bad for customers/users.
hiddencost•12h ago
IDK, I pay someone money for a service, I would like the terms of our contract to protect me from getting cut off capriciously. The fact that Anthropic is selling a service without providing a contract that provides consumers with protections kinda sucks?
hiddencost•12h ago
Also, like, anti competitive behavior is kinda ... Illegal?
threeseed•12h ago
No it's not. It's only illegal if you are found guilty of it.

And for that to happen you need to be (a) an effective monopoly, (b) have a negative direct or indirect impact on consumers, (c) large enough for regulators to care about and (d) be in a regulatory environment that prioritizes this enforcement.

rafram•12h ago
Not all anticompetitive practices are illegal, but this isn’t even anticompetitive.

Anticompetitive practices are actions that reduce competitiveness in a market by entrenching your dominance over the (usually smaller) competition.

Not allowing your competitor to buy your product arguably increases competition? It pushes them to improve their own product to be as good as yours.

princealiiiii•15h ago
Any app built on top of these model providers could become a competitor to those providers. Since the model providers are currently in the lowest-margin part of the business, it is likely they will try to expand into the app layer and start pulling the rug out from under the businesses built on top of them.

Amazon had a similar tactic, where it would use other sellers on its marketplace to validate market demand for products, and then produce its own cheap copies of the successes.

liuliu•15h ago
Or AWS, and AWS managed services vs. other managed services on top of AWS.
jsnell•15h ago
The model providers are not in the low-margin part of the business. The unit economics of paid-per-token APIs are clearly favorable, and scale amazingly well as long as you can procure enough compute.

I think it's the subscription-based models that are tricky to make work in the long term, since they suffer from adverse selection. Only the heaviest users will pay for a subscription, and those are the users that you either lose money on or make unhappy with strict usage limits. It's kind of the inverse of the gym membership model.

Honestly, I think the subscriptions are mainly used as a demand moderation method for advanced features.
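The adverse-selection dynamic described above (a flat fee, heavy users self-selecting in) can be sketched with toy numbers; every figure here is hypothetical, not from any provider's actual pricing:

```python
# Toy model of subscription adverse selection (all numbers hypothetical).
# A flat monthly fee covers users whose metered serving cost varies wildly.

FLAT_FEE = 20.0          # $/month subscription price
COST_PER_M_TOKENS = 5.0  # provider's serving cost per million tokens

def monthly_margin(tokens_millions: float) -> float:
    """Provider margin on one subscriber for the month."""
    return FLAT_FEE - tokens_millions * COST_PER_M_TOKENS

light_user = monthly_margin(1)    # 20 - 5  = +15: profitable
heavy_user = monthly_margin(10)   # 20 - 50 = -30: loss-making

# Unlike a gym, the users most likely to subscribe are the heavy ones,
# so the subscriber mix skews toward the loss-making rows.
print(light_user, heavy_user)
```

The "inverse gym membership" quip falls out directly: a gym profits from members who don't show up, while an inference subscription loses on the members who do.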

raincole•15h ago
> The model providers are not in the low margin part of the business.

Many people believe that model providers are running at negative margin.

(I don't know how true it is.)

apothegm•14h ago
They probably have been running at negative margin, or at the very least started that way. But between hardware and software developments, their cost structures are undoubtedly improving over time; otherwise we wouldn't be seeing pricing drop with each new generation of models. In fact, I would bet that their margins are improving in spite of the price drops.
jsnell•14h ago
Yes, many people believe that, but it doesn't seem to be an evidence-based belief. I've written about this in some detail[0][1] before. But since just linking to one's own writing is a bit gauche and doesn't make for a good discussion, I'll summarize :)

1. There is no point in providing paid APIs at negative margins, since there's no platform power in having a larger paid API share (paid access can't be used for training data, no lock-in effects, no network effects, no customer loyalty, no pricing power on the supply side since Nvidia doesn't give preferential treatment to large customers). Even selling access at break-even makes no sense, since that is just compute you're not using for training, or not selling to other companies desperate for compute.

2. There are 3rd-party providers selling only the compute, not models, who have even less reason to sell at a loss. Their prices are comparable to 1st-party providers.

3. Deepseek published their inference cost structure for R1. According to that data their paid API traffic is very lucrative (their GPU rental costs for inference are under 20% of their standard pricing, i.e. >80% operating margins; and the rental costs would cover power, cooling, depreciation of the capital investment).

Insofar as frontier labs are unprofitable, I think it's primarily due to them giving out vast amounts of free access.

[0] https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

[1] https://news.ycombinator.com/item?id=44165521
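The arithmetic behind point 3 is a single line. The 20% cost fraction is the bound cited from DeepSeek's disclosure; the dollar price is a made-up illustration:

```python
# Operating margin implied by "GPU rental cost is under 20% of list price".
price_per_m_tokens = 1.00   # hypothetical list price, $/M tokens
gpu_cost_fraction = 0.20    # upper bound on cost from DeepSeek's published data

cost_per_m_tokens = price_per_m_tokens * gpu_cost_fraction
operating_margin = 1 - cost_per_m_tokens / price_per_m_tokens

# 80% margin at the 20% cost bound; higher if the true fraction is lower.
print(f"{operating_margin:.0%}")
```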

what•13h ago
There are more factors to cost than just the raw compute to provide inference. They can’t just fire everyone and continue to operate while paying just the compute cost. They also can’t stop training new models. The actual cost is much more than the compute for inference.
brookst•13h ago
I heart you.

Classic fixed / variable cost fallacy: if you look at the steel and plastic in a $200k Ferrari, it’s worth about $10k. They have 95% gross margins! Outrageous!

(Nevermind the engine R&D cost, the pre-production molds that fail, the testing and marketing and product placement and…)

ummonk•13h ago
That's all the more reason to run at a positive margin though - why shovel money into taking a loss on inference when you need to spend money on R&D?
jsnell•13h ago
Yes, there are some additional operating costs, but they're really marginal compared to the cost of the compute. Your suggestion was personnel: Anthropic is reportedly on a run-rate of $3B with O(1k) employees, most of whom aren't directly doing ops. Likewise they also have to pay for non-compute infra, but it is a rounding error.

Training is a fixed cost, not a variable cost. My initial comment was on the unit economics, so fixed costs don't matter. But including the full training costs doesn't actually change the math that much as far as I can tell for any of the popular models. E.g. the alleged leaked OpenAI financials for 2024 projected $4B spent on inference, $3B on training. And the inference workloads are currently growing insanely fast, meaning the training gets amortized over a larger volume of inference (e.g. Google showed a graph of their inference volume at Google I/O -- 50x growth in a year, now at 480T tokens / month[0])

[0] https://blog.google/technology/ai/io-2025-keynote/
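The amortization point can be made concrete with the figures quoted above ($4B inference, $3B training from the alleged 2024 leak); the growth multiple is an illustrative assumption, not a reported number:

```python
# Training is a fixed cost: as inference volume grows, training's share of
# total cost shrinks. Figures in $B from the alleged leaked 2024 financials;
# the 10x volume multiple is an illustrative assumption.
inference_cost = 4.0   # variable: scales with served tokens
training_cost = 3.0    # fixed: same spend regardless of inference volume

def training_share(volume_multiple: float) -> float:
    """Training's share of total cost if inference volume grows by `volume_multiple`."""
    return training_cost / (training_cost + inference_cost * volume_multiple)

today = training_share(1)    # 3/7  ~ 43% of total cost
later = training_share(10)   # 3/43 ~  7% of total cost
```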

lmeyerov•13h ago
I think you miss 2 big aspects:

1. High-volume providers get efficiencies that low-volume ones do not, both because more workload creates more optimization opportunities and because they can staff better engineering to begin with. The result is that what is break-even for a lower-volume firm is profitable for a higher-volume one, and since high volume is magnitudes more scale, this quickly pays for many people. By being the high-volume API, this game can be played. If they choose not to bother, it is likely because of strategic views on opportunity cost, not inability.

That's not even the interesting analysis, which is what the real stock value is, or whatever corp structure scheme they're doing nowadays:

2. Growth for growth's sake. Uber was exactly this kind of growth-at-all-costs play, going deeper into debt with every customer and fundraise. My understanding is they were eventually able to tame costs and find side businesses (delivery, ...), with the threat becoming more about a category shift to self-driving. By owning the channel, they could be the one to monetize as that got figured out.

Whether tokens or something else becomes what is charged for at the profit layers (with breakeven tokens as cost of business), or subsidization ends and competitive pricing dominates, being the user interface to chat and the API interface to devs gives them channel. Historically, it is a lot of hubris to believe channel is worthless, and especially in an era of fast cloning.

jsnell•12h ago
> High volume providers get efficiencies that low volume do not

But paid-per-token APIs at negative margins do not provide scaling efficiencies! It's just the provider giving away a scarce resource (compute) for nothing tangible in exchange. Whatever you're able to do with that extra scale, you would have been able to do even better if you hadn't served this traffic.

In contrast, the other things you can use the compute for have a real upside for some part of the genai improvement flywheel:

1. Compute spent on free users gives you training data, allowing the models to be improved faster.

2. Compute spent on training allows the models to be trained, distilled and fine-tuned faster. (Could be e.g. via longer training runs or by being able to run more experiments.)

3. Compute spent on paid inference with positive margins gives you more financial resources to invest.

Why would you intentionally spend your scarce compute on unprofitable inference loads rather than the other three options?

> 2. Growth for growth's sake.

That's fair! It could in theory be a "sell $2 for $1" scenario from the frontier labs that are just trying to pump up their revenue numbers to fund-raise from dumb money who don't think to at least check on the unit economics. OpenAI's latest round certainly seemed to be coming from the dumbest money in the world, which would support that.

I have two rebuttals:

First, it doesn't explain Google, who a) aren't trying to raise money, b) aren't breaking out genai revenue in their financials, so pumping up those revenue numbers would not help at all. (We don't even know how much of that revenue is reported under Cloud vs. Services, though I'd note that the margins have been improving for both of those segments.)

Second, I feel that this hypothetical, even if plausible, is trumped by Deepseek publishing their inference cost structure. The margins they claim for the paid traffic are high by any standard, and they're usually one of the cheaper options at their quality level.

lmeyerov•12h ago
I think you ignored both of my points -

1. You just negated a technical statement with... I don't even know what. Engineering opportunities at volume and high skill allow changing the margin in ways low volume and low capitalization provider cannot. Talk to any GPU ML or DC eng and they will rattle off ways here. You can claim these opportunities aren't enough, but you don't seem to be willing to do so.

2. Again, even if tokens are unprofitable at scale (which I doubt), market position means owning a big chunk of the distribution channel for more profitable things. Classic loss leader. Being both the biggest UI + API is super valuable. Eg, now that code as a vertical makes sense, they bought more UI here, and now they can go from token pricing closer to value pricing and fancier schemes - imagine taking on GitHub/Azure/Vercel/... . As each UI and API point takes off, they can devour the smaller players who were building on top to take over the verticals.

Separately, I do agree, yes, the API case risks becoming (and staying) a dumb pipe if they fail to act on it. But as much as telcos hate their situation, it's nice to be one.

jsnell•11h ago
I don't think I was ignoring your points. I thought I was replying very specifically to them, to be honest, and providing very specific arguments. Arguments that you, by the way, did not respond to in any way here, beyond calling them "[you] don't even know what". That seems quite rude, but I'll give you the benefit of the doubt.

Maybe if you could name one of those potential opportunities, it'd help ground the discussion in the way that you seem to want?

Like, let's say that additional volume means one can do more efficient batching within a given latency envelope. That's an obvious scale-based efficiency. But a fuller batch isn't actually valuable in itself: it's only valuable because it allows you to serve more queries.

But why? In the world you're positing where these queries are sold at negative margins and don't provide any other tangible benefit (i.e. cannot be used for training), the provider would be even better off not serving those queries. Or, more likely, they'd raise prices such that this traffic has positive margins, and they receive just enough for optimal batching.

> You can claim these opportunities aren't enough, but you don't seem to be willing to do so.

Why I would claim that? I'm not saying that scaling is useless. I think it's incredibly valuable. But scale from these specific workloads is only valuable because these workloads are already profitable. If it wasn't, the scarce compute would be better off being spent on one of the other compute sinks I listed.

(As an example, getting more volume to more efficiently utilize the demand troughs is pretty obviously why basically all the major providers have some sort of batch/off-peak pricing plans at very substantial discounts. But it's not something you'd see if their normal pricing had negative margins.)

> Engineering opportunities at volume and high skill allow changing the margin in ways low volume and low capitalization provider cannot.

My point is that not all volume is the same. Additional volume from users whose data cannot be used to improve the system and who are unprofitable doesn't actually provide any economies of scale.

> 2. Again, even if tokens are unprofitable at scale (which I doubt),

If you doubt they're unprofitable at scale, it seems you're saying that they're profitable at scale? In that case I'd think we're actually in violent agreement. Scaling in that situation will provide a lot of leverage.
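The batching example earlier in this comment ("a fuller batch isn't valuable in itself") can be sketched with invented constants: the fixed cost of a forward pass is shared across the batch, so per-query cost falls with batch size, but only queries actually served realize that saving.

```python
# Toy model: per-query cost falls as batch size rises, because the fixed
# cost of one forward pass (weight loading, kernel launch) is shared.
# Both constants are invented for illustration.
FIXED_COST_PER_BATCH = 8.0   # cost of a forward pass regardless of batch size
MARGINAL_COST_PER_QUERY = 1.0

def cost_per_query(batch_size: int) -> float:
    return FIXED_COST_PER_BATCH / batch_size + MARGINAL_COST_PER_QUERY

small = cost_per_query(1)    # 9.0: one query bears the whole fixed cost
large = cost_per_query(16)   # 1.5: fixed cost amortized across the batch
```

The disagreement above is then about whether the traffic filling the batch is itself sold above or below the 1.5-style amortized cost.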

lmeyerov•10h ago
> But why? In the world you're positing where these queries are sold at negative margins and don't provide any other tangible benefit (i.e. cannot be used for training), the provider would be even better off not serving those queries. Or, more likely, they'd raise prices such that this traffic has positive margins, and they receive just enough for optimal batching. ... But scale from these specific workloads is only valuable because these workloads are already profitable

I'm disputing this two-fold:

- Software tricks like batching and hardware like ASICs mean what is negative/neutral for a small or unoptimized provider is eventually positive for a large, optimized provider. You keep claiming they cannot do this with positive margin for some reason, or only if already profitable, but those are unsubstantiated claims. Conversely, I'm giving classic engineering reasons why they can keep driving down their COGS to flip to profitability as long as they have capital and scale. This isn't selling $1 for $0.90, because there is a long way to go before their COGS are primarily constrained by the price of electricity and sand. Instead of refuting this, you just keep positing that it's inherently negative margin.

In a world where inference consumption just keeps going up, they can keep pushing the technology advantage and creating even a slight positive margin goes far. This is the classic engineering variant of buttoning margins before an IPO: if they haven't yet, it's probably because they are intentionally prioritizing market share growth for engineering focus vs cost cutting.

- You are hyper-fixated on tokens, and not on the fact that owning a large % of distribution lets them sell other things. Eg, instead of responding to my point 2 here, you are again talking about token margin. Apple doesn't have to make money on transistors when they have a 30% tax on most app spend in the US.

lmeyerov•10h ago
Maybe this is the disconnect on the token side: you seem to think they can't keep improving the margin to reach profitability, that margins are static and will just get worse, not better.

I think DeepSeek instead just showed they haven't really bothered yet. They'd rather focus on growing, and capital is cheap enough for these firms that optimizing margins is relatively distracting. Obviously they do optimize, but probably not at the expense of velocity and growth.

And if they do seriously want to tackle margins, they should pull a groq/Google and go aggressively deep. Ex: fab something. Which... They do indeed fund raise on.

jsnell•9h ago
No, it feels more like the disconnect is that I think they're all compute-limited and you maybe don't? Almost every flop they use to serve a query at a loss is a flop they didn't use for training, research, or for queries that would have given them data to enable better training.

Like, yes, if somebody has 100k H100s and are only able to find a use for 10k of them, they'd better find some scale fast; and if that scale comes from increasing inference workloads by 10x, there's going to be efficiencies to be found. But I don't think anyone has an abundance of compute. If you've instead got 100k H100s but demand for 300k, you need to be making tradeoffs. I think loss-making paid inference is fairly obviously the worst way to allocate the compute, so I don't think anyone is doing it at scale.

> I think deepseek instead just showed they haven't really bothered yet.

I think they've all cared about aggressively optimizing for inference costs, though to varying levels of success. Even if they're still in a phase where they literally do not care about the P&L, cheaper costs are highly likely to also mean higher throughput. Getting more throughput from the same amount of hardware is valuable for all their use cases, so I can't see how it couldn't be a priority, even if the improved margins are just a side effect.

(This does seem like an odd argument for you to make, given you've so far been arguing that of course these companies are selling at a loss to get more scale so that they can get better margins.)

> - You are hyper fixated on tokens, and not that owning a large % of distribution lets them sell other things. Eg, instead of responding to my point 2 here, you are again talking about token margin. Apple doesn't have to make money on transistors when they have a 30% tax on most app spend in the US.

I did not engage with that argument because it seemed like a sidetrack from the topic at hand (which was very specifically the unit economics of inference). Expanding the scope will make convergence less likely, not more.

There's a very good reason all the labs are offering unmonetized consumer products despite losing a bundle on those products, but that reason has nothing at all to do with whether inference when it is being paid for is profitable or not. They're totally different products with different market dynamics. Yes, OpenAI owning the ChatGPT distribution channel is vastly valuable for them long-term, which is why they're prioritizing growth over monetization. That growth is going to be sticky in a way that APIs can't be.

Thanks, good discussion.

lmeyerov•3h ago
I agree they are compute limited and disagree that they are aggressively optimizing. Many small teams are consistently showing many optimization gain opportunities all the way from app to software to hardware, and deepseek was basically just one especially notable example of many. In my experience, there are levels of effort to get corresponding levels of performance, and with complexity slowdowns on everyone else, so companies are typically slow-but-steady here, esp when ZIRP rewards that (which is still effectively in place for OpenAI). Afaict OpenAI hasn't been pounding on doors for performance people, and generally not signalling they go hard here vs growth.

Re: Stickiness => distribution leadership => monetization, I think they were like 80/20 on UI vs API revenue, but as a leader, their API revenue is still huge and still growing, esp as enterprise advance from POCs. They screwed up the API market for coding and some others (voice, video?), so afaict are more like "one of several market share leaders" vs "leading" . So the question becomes: Why are they able to maintain high numbers here, eg, is momentum enough so they can stay tied in second, and if they keep lowering costs, stay there, and enough so it can stay relevant for more vertical flows like coding? Does bundling UI in enterprise mean they stay a preferred enterprise partner? Etc . Oddly, I think they are at higher risk of losing the UI market more so than the API market bc an organizational DNA change is likely needed for how it is turning into a wide GSuite / Office scenario vs simple chat (see: Perplexity, Cursor, ...). They have the position, but it seems more straightforward for them to keep it in API vs UI.

solarkraft•1h ago
> Engineering opportunities at volume and high skill allow changing the margin in ways low volume and low capitalization provider cannot

Everything depends on this actually being possible and I haven’t seen a lot of information on that so far.

DeepSeek's publication suggests that it is possible - specifically there was recently a discussion on batching. Google might have some secret sauce with their TPUs (is that why Gemini is so fast?)

And there are still Cerebras and Groq (why haven’t they taken over the world yet?), but their improvements don’t appear to be scale dependent.

Speculating that inference will get cheaper in the future might justify selling at a loss now to at least gain mind share, I guess.

mvdtnz•15h ago
What evidence do you have that there's decent margin on the APIs?
tonyhart7•14h ago
The subscription model is just there to serve the B2C side of the business, which in turn feeds the B2B side.

Anthropic themselves said that enterprise is where the money is, but you can't just serve enterprise from the get-go, right?

This is where the indirect B2C influence comes in.

brookst•13h ago
Citation needed.

Model providers spend a ton of money. It is unclear if they will ever have high margins. Today they are somewhere between zero and negative big numbers.

hshdhdhj4444•14h ago
Even to the extent that’s true, that doesn’t seem to be the issue here.

OpenAI is acquiring Windsurf which is its most direct competitor.

selcuka•14h ago
True. Otherwise Anthropic would cut access to other code assistants too as they all compete with Claude Code.
SoftTalker•13h ago
They might still. Why not?

Illustrates a risk of building a product with these AI coding tools. If your developers don't know how to build applications without using AI, then you're at the mercy of the AI companies. You might come to work one day and find that, accidentally or deliberately or as the result of a merger or acquisition, the tools you use are suddenly gone.

pbh101•13h ago
This is true of any SaaS vendor
selcuka•10h ago
> If your developers don't know how to build applications without using AI, then you're at the mercy of the AI companies.

The same can be said if your developers don't know how to build applications:

- without using syntax highlighting ...

- without using autocomplete ...

- without using refactoring tools ...

- without using a debugger ...

Why do we not care about those? Because these are commodity features. LLMs are also a commodity now. Any company with a few GPUs and bandwidth can deploy the free DeepSeek or QwQ models and start competing with Anthropic/OpenAI. It may or may not be as good as Claude 4, but it won't be a catastrophe either.
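The "LLMs are a commodity" claim rests partly on the fact that most serving stacks expose the same OpenAI-style chat-completion API, so swapping Anthropic for a self-hosted DeepSeek is largely a config change. A minimal sketch; the base URL and model name are placeholders for whatever a given deployment uses, and nothing is actually sent here:

```python
# Building an OpenAI-style chat-completion request for a self-hosted model.
# BASE_URL and the model name are hypothetical placeholders; switching
# providers is usually just a matter of changing these two strings.
import json

BASE_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical endpoint
payload = {
    "model": "deepseek-r1",  # any locally served open-weight model
    "messages": [{"role": "user", "content": "Refactor this function."}],
}

request_body = json.dumps(payload)
# An HTTP client (urllib.request, requests, etc.) would POST request_body
# to BASE_URL with a Content-Type of application/json.
```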

brookst•13h ago
I 100% agree with you except your framing makes it sound like the model providers are doing something wrong.

If I spend a ton of money making the most amazing ceramic dinner plates ever and sell them to distributors for $10 each, and one distributor strikes gold in a market selling them at $100/plate, despite adding no value beyond distribution… hell yeah I’m cutting them off and selling direct.

I don’t really understand how it’s possible to see that in moral terms, let alone with the low-value partner somehow a victim.

pbh101•13h ago
I don’t think it is at all clear that Windsurf adds zero value. Why do you think this is a helpful analogy?
osigurdson•13h ago
The analogy is more like this: imagine you're selling 100 ceramic dinner plates at $6 each. Now someone comes in and buys them from you for $5 each, undercutting your margin. Then a third company comes in and literally eats your lunch off your own ceramic dinner plates. The moral of the story is that any story involving ceramic dinner plates is a good one, regardless of the utility of the analogy.
widdakay•15h ago
These are two different products. It's like SpaceX launching satellites for competitive satellite internet services. They didn't care that they were providing launch capabilities for a competitor and neither should Anthropic. What if Apple stopped allowing you to use an iPhone if you worked at Google?
thrdbndndn•15h ago
I think your two examples, along with the one from the news, are too different in nature for the analogy to hold. These situations can only be judged case by case.
tonyhart7•14h ago
"They didn't care that they were providing launch capabilities for a competitor"

Yeah, because no other internet provider had SpaceX's reusable rocket technology.

It's not really the same, you know.

killerstorm•14h ago
Do you understand that data can be used for training?
paxys•13h ago
If SpaceX was launching every rocket at a loss then they would absolutely care about a competitor taking advantage of it.
demosthanos•13h ago
> What if Apple stopped allowing you to use an iPhone if you worked at Google?

This wouldn't be remotely comparable. That would be targeting a competitor's employees, not a competitor's subsidiary.

If you want to go the Apple-Google route, a better comparison is Apple refusing to let you pair an AirTag with an Android phone. Which is something that they do, in fact, do.

danpalmer•8h ago
Another way this isn't comparable is that the training data is critical for the services each provides. Seeing answers Claude gives (at scale, for free) would be a huge competitive advantage to OpenAI, whereas Apple Mail seeing the email you send via Gmail wouldn't convey any competitive advantage in email, for example.
dboreham•15h ago
Probably most here are not old enough to remember, but there was a time when Google had all sorts of data access APIs and encouraged developers to create applications that used said APIs. And then they disabled them all.
bitpush•15h ago
How is that relevant?
8note•14h ago
API openness is temporary. Don't build a business on it.
pton_xd•15h ago
I'm sure this all fits neatly into their EA "maximizing positive outcomes for humanity" world-view somehow.
CyberMacGyver•14h ago
It could be argued, from Anthropic's perspective, that OpenAI and Worldcoin are not positive for humanity, so this is in fact necessary.
dmix•14h ago
I don't think EA people are in favor of going out of business and having no money.
bravesoul2•13h ago
Can't save humanity from the heat death of the universe without a castle.
tuyguntn•15h ago
I don't know what kind of agreement they had, or whether they had any agreement at all, but with this move Anthropic is showing itself to be an unreliable provider.

It's the same as: we can cut access anytime. "I think it would be odd for us to be selling Claude to <YOUR_COMPANY>"

charlie0•15h ago
https://www.businessinsider.com/restaurant-accused-reselling...
mountainriver•14h ago
This should be a clear signal to the community that Anthropic can't be trusted. OpenAI can't either; TBD on Google.
alehlopeh•14h ago
TBD on whether Google can be trusted? That ship sailed long ago.
mountainriver•14h ago
To steal markets? Honestly, I can't think of an example, but someone can correct me.

I certainly don’t trust them to not kill whole products on demand

k__•14h ago
When I read about Cursor and Windsurf, I'm quite happy that I only use Aider. An open source project, not associated with anyone else...
bravesoul2•13h ago
Curser and Windserf

Yeah, better to go with open technologies. Maybe use Groq for inference, knowing you can switch over later if needed since you're using Llama or DeepSeek.

wincy•14h ago
Wow excited for when Anthropic buys Cursor in a couple months and I get locked out of using OpenAI models with it. This is depressing. I just want stuff to work.
OsrsNeedsf2P•14h ago
This is why I use Aider. Open source or GTFO
TiredOfLife•2h ago
And then Anthropic cuts API access to Aider.
deadbabe•14h ago
Aren’t the OpenAI models worse than the alternatives?
dmix•14h ago
The difference is likely very marginal in practice for what most people are doing. o3 and 4o do programming just fine.
jerpint•14h ago
That may be true today, but it may not be in a few weeks. Ideally you have access to all models, whenever.
selcuka•14h ago
Anthropic models are slightly better for coding tasks anyway. I believe Windsurf is in more trouble.
brookst•13h ago
Friendly advice: you will be happier if you defer reacting to things until they are actually real.

We can all imagine all sorts of terrible futures. Many of us do. But there is no upside in being prematurely outraged.

paxys•13h ago
If LLMs were a sustainable business then Anthropic would have no problem selling Claude to a competitor. Heck they'd brag about it ("see OpenAI uses our models to code instead of their own!"). You see this in the industry all the time. Large tech companies compete with each other and sue each other and have deep business relationships at the same time.

What this change really says is that Anthropic doesn't want to burn VC money to help a competitor. And that is the reality of "I just want stuff to work". It won't just work because there's no stable business underneath it. Until that business can be found things will keep changing and get shittier.

demosthanos•13h ago
Does anyone here actually use OpenAI models in Cursor? I kind of forgot they were even there. I've just been alternating between Claude and Gemini, and the sense I've had in online discussions is that that's pretty normal.
34679•13h ago
Void is basically the same thing, but open source and better. It's easy to use with any provider API key, even LM Studio for local models. You can use it with free models available from OpenRouter to try it out, but the quality of output is just as dependent on the model as any other solution.

https://voideditor.com/

https://github.com/voideditor/void
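To illustrate the "any provider API key" point: OpenAI-compatible endpoints (OpenRouter's hosted API, LM Studio's local server, and many others) all accept the same chat-completions request shape, so switching providers is mostly a matter of swapping a base URL and key. A minimal sketch, assuming the providers' documented default endpoints; the model slug is purely illustrative:

```python
import os

# OpenAI-compatible chat-completions endpoints (providers' documented defaults).
PROVIDERS = {
    "openrouter": "https://openrouter.ai/api/v1/chat/completions",
    "lmstudio": "http://localhost:1234/v1/chat/completions",  # LM Studio's local server
}

def chat_request(provider, model, prompt, api_key=""):
    """Build the same OpenAI-style chat request for whichever provider you pick."""
    url = PROVIDERS[provider]
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {"content-type": "application/json"}
    if api_key:  # a local LM Studio server typically needs no key
        headers["authorization"] = f"Bearer {api_key}"
    return url, payload, headers

url, payload, headers = chat_request(
    "openrouter",
    "deepseek/deepseek-chat",  # illustrative model slug
    "Explain this stack trace.",
    api_key=os.environ.get("OPENROUTER_API_KEY", ""),
)
print(url)
```

Only the URL and key change between providers; the payload is identical, which is what makes tools like Void provider-agnostic.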

zwarag•7h ago
People like Windsurf and Cursor because they offer a flat rate.
34679•5h ago
Yeah, for sure. That's why I tried Cursor for a month. But as soon as I ran out of fast requests it became unusable; it had me waiting several minutes for the first token. I didn't realize how bad the experience was, fast requests included, until I used Void. It makes Cursor's fast requests seem slow, and I tend to need fewer of them. The differences are that Void requests go straight to the provider instead of through Cursor's own servers first, and Void's system prompts seem to keep the models more on task.
demosthanos•3h ago
How much do you spend per month on Void, though? Your testimonial is great but incomplete without that information.
solumunus•8h ago
Why would they when Claude Code is night and day better? It makes Cursor look like a joke in my experience.
HyprMusic•14h ago
This feels like a cheap trick to drive users towards Claude Code. It's likely no coincidence that this happened at the same time they announced subscription access to Claude Code.

The Windsurf team has repeatedly stated that they're running at a loss, so all this seems to have achieved is giving OpenAI an excuse to cut its third-party costs and drive more Windsurf users towards its own models.

ramesh31•14h ago
Nobody is actually using Windsurf. It was an acquihire and a squashing of a competitor that gained ground in the enterprise contract market really early. Anyone doing agentic coding seriously is using open source tooling with direct token pricing from the major model providers. Windsurf, Cursor, et al. are just expensive middlemen with no added value.
peterhadlaw•14h ago
Which open source agentic tooling are you using? I'm a fan of Aider but I find it lacking on the agentic side of things. I've looked at Goose, Plandex, Opencode, etc. Which do you like?
ramesh31•14h ago
Cline all the way: https://cline.bot/.

Haven't found anything else that even comes close.

peterhadlaw•13h ago
Dang, was hoping for something terminal based <3 but thank you
TiredOfLife•2h ago
If nobody is using it, then why cut off access?
artdigital•14h ago
> This feels like a cheap trick to drive users towards Claude Code

How did you come to this conclusion? It's very much like he remarked: OpenAI acquired Windsurf, and OpenAI is Anthropic's direct competitor.

It doesn’t make strategic sense to sell Claude to OpenAI. OpenAI could train against Claude weights, or OpenAI can cut out Anthropic at any moment to push their own models.

The partnership isn’t long lasting so it doesn’t make sense to continue it.

selcuka•14h ago
> OpenAI could train against Claude weights

OpenAI can always buy a Claude API subscription with a credit card if they want to train something. This change only prevents the Windsurf product from offering Claude APIs to their customers.

brookst•13h ago
Other than, you know, terms and contracts.
selcuka•10h ago
But that's a completely different story, no? Cutting off Windsurf has nothing to do with enforcing those terms.
huxley•13h ago
Totally irrelevant. Anthropic isn't cutting off OpenAI; it's cutting off Windsurf users.
artdigital•13h ago
Windsurf users can still plug in their own Anthropic key and continue using the models. It's Windsurf subscribers (i.e., OpenAI customers) who use the models through the Windsurf service, with Windsurf's servers (now OpenAI's) acting as a proxy, that are getting cut off.

I don't see how this is irrelevant. Windsurf is now a first-party product of Anthropic's most direct competitor. Imagine a car company integrating the cloud tech of a rival manufacturer.

HyprMusic•6h ago
Exactly. It's like car manufacturers cutting off Android because Google owns Waymo. The only people who pay are the consumers.
bravesoul2•14h ago
Antitrust?

Maybe GitHub and Microsoft should kick out all competing company 3rd party integrations.

See where this leads...

arnaudsm•13h ago
We are about to enter the era of aggressive LLM monetization and anti-competitiveness. We got used to cheap, subsidized models; I bet in two years we'll pay double for the same service.
jmward01•13h ago
Decisions like this may shoot them in the foot later, as open source and cheaper compute continue to push into frontier-model territory. I have no loyalty to the big providers and would take a minor quality/cost hit to jump ship. Right now it isn't a minor quality/cost hit, but knowing that they can cut you off if they think you'll end up competing with them makes me tolerate an even bigger gap.