
The Beginning of Scarcity in AI

https://tomtunguz.com/ai-compute-crisis-2026/
29•gmays•1h ago

Comments

Lapalux•1h ago
"The first hit is free....."
stupefy•55m ago
What limits LLM inference accelerators? I heard about Groq (https://groq.com/) not sure how much it pushes away the problem.
vessenes•52m ago
ASML only makes a certain number of machines a year that can do extreme ultraviolet lithography.

Also - turbine blades limit power, according to Elon.

Between them: we cannot build chip fabs past a certain rate, and we cannot stand up the datacenters to run the desired chips past a certain rate. Different people believe one or the other is the 'true' current bottleneck. The turbine supply chain looks much more tractable to scale -- EUV is essentially the most complicated production process humans have ever devised.

ls612•48m ago
Presumably ASML can increase production if demand is high enough; the question is over what time frame. Five years seems plausible to me, but I honestly don't know what that number is.
vessenes•46m ago
It's ... really long, according to Dylan Patel on the Dwarkesh Podcast. The supply chain is extremely deep and complex.
juliansimioni•26m ago
Yes. And the fab companies and their suppliers are deliberately and wisely slow to scale up production to meet short-term changes in demand. They've seen the history of the semiconductor industry and its constant boom-and-bust cycles. But they have the highest op-ex costs of anyone, so when the party's over, they are the ones who pay for it the most.
zozbot234•34m ago
You don't really need EUV for reasonable hardware, you can use DUV and scale up your design effort with things like multiple patterning, etc. to more closely approximate EUV outcomes. Sure, your compute per watt figures will suffer from this but if AI compute is as profitable as it's claimed to be, that's still a viable approach.
andai•22m ago
Is global compute bottlenecked by one company?
Miraste•4m ago
If only there were some form of cheap, widely manufactured power generation technology that didn't use turbines... Are they really going to wait until 2030 to get more turbines rather than invest in solar?
vessenes•54m ago
It seems very possible that we have at least five years of real limitations on compute coming up. Maybe ten, depending on ASML. I wonder what an overshoot looks like. I also wonder if there might be room for new entrants in a compute-scarce environment.

For instance, at some point, could CoreWeave field a frontier team as it holds back 10% of its allocations over time? Pretty unusual situation.

dist-epoch•11m ago
Jensen just said that if the signal/commitments are there, ASML can scale in 2-3 years.
mattas•54m ago
This notion that "we don't have enough compute" does not cleanly reconcile with the fact that labs are burning cash faster than any cohort of companies in history.

If I am a grocery store that pays $1 for oranges and sells them for $0.50, I can't say, "I don't have enough oranges."

earthnail•49m ago
If there were more oranges you’d pay less to buy them and your economics would work out.
0x3f•41m ago
Not sure if this is a joke or not, but competitive pressure still exists. This only really holds if you're the only orange seller.
vessenes•48m ago
You misunderstand.

"I built a ship to go to the Indies and bring back tea."

"Bro, the ship cost 100,000 pounds sterling and only brought back 50,000 pounds of tea. I don't care if you paid 12,500 pounds for the tea itself, you're losing money."

There is a very rational reason labs are spending everything they can get on more compute right now. The tea (inference) pays 60%+ margins, and that number is rising. And that is AFTER the hyperscalers take their margins. There is an immense amount of profit floating around this system, and strategics at the edge believe they can build and control the demand through combined spend on training and inference in the proper ratios.
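The tea analogy can be made concrete with back-of-the-envelope arithmetic, using the figures from the analogy above and treating the ship as training capex and the tea's cost of goods as inference cost (the mapping is a sketch, not a claim about any lab's actual books):

```python
# Voyage economics from the analogy: the *voyage* loses money,
# but the *tea trade itself* is high-margin.
ship_cost = 100_000   # pounds sterling, one-time capital outlay ("training")
tea_cost = 12_500     # cost of goods for the cargo ("inference cost")
tea_revenue = 50_000  # sale price of the cargo ("inference revenue")

# Gross margin on the tea alone, ignoring the ship:
gross_margin = (tea_revenue - tea_cost) / tea_revenue
print(f"gross margin on tea: {gross_margin:.0%}")  # 75%

# Profit including the ship: negative, so the first voyage loses money.
voyage_profit = tea_revenue - tea_cost - ship_cost
print(f"profit including the ship: {voyage_profit}")

# But the ship is amortized over many voyages; each additional voyage
# only pays the cost of goods.
voyages_to_break_even = ship_cost / (tea_revenue - tea_cost)
print(f"voyages to recoup the ship: {voyages_to_break_even:.1f}")  # 2.7
```

The same shape holds for compute: high gross margin on inference can coexist with enormous total losses while the "ship" is still being paid off.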

SpicyLemonZest•29m ago
60%+ margins according to numbers which are not published publicly and have not AFAICT been audited.

Could they be accurate? Sure, I think people who claim this is impossible are overconfident. But I would encourage anyone who assumes they must be right to read a history of the Worldcom scandal. It's really quite easy for a person who wants to be making money (or an LLM who's been instructed to "run the accounts make no mistakes"!) to incorrectly categorize costs as capital investments when nobody's watching carefully.

FloorEgg•44m ago
There is a major logic flaw in what you're saying.

'If I am a grocery store that pays $1 for oranges and sells them for $0.50, I can't say, "I don't have enough oranges."'

How about: "If I'm a grocery store and I see no limit on demand for oranges at $0.50, but they currently cost $1, I can say 'if oranges were cheaper I could sell orders of magnitude more of them.'"

Buying oranges for $1 and selling for $0.5 is an investment into acquiring market share and customer relationships and a gamble on the price of oranges falling in the future.

0x3f•39m ago
> acquiring market share and customer relationships

The whole setup rests on this, and it seems mythical to me. These guys have basically equivalent products at this point.

TeMPOraL•9m ago
You can if you're exhausting the global production of oranges.
isawczuk•47m ago
It's artificial scarcity. LLM inference will soon be as much of a commodity as cloud compute.

There are still 2-3 years before ASIC LLM inference catches up.

vessenes•44m ago
I don't think so. GB200 prices are GOING UP. A100s are still expensive. This implies massive utilization and demand, no? These machines are not sitting idle, or prices would drop in the very competitive hyperscaler environment.
observationist•29m ago
The problem with this idea is that someone can, and likely will, come up with the next best architecture that leapfrogs the current frontier models at least once a year, likely faster, for the foreseeable future. This means by the time you've manufactured your LLM on an ASIC, it's 4-5 generations behind, and probably much less efficient than current SOTA model at scale.

It won't make sense for ASIC LLMs to manifest until things start to plateau, otherwise it'll be cheaper to get smarter tokens on the cloud for almost all use cases.

That said, a 10 trillion parameter model on a bespoke compute platform overcomes a lot of efficiency and FOOM aspects of the market fit, so the angle is "when will models that can be run on an asic be good enough that people will still want them for various things even if the frontier models are 10x smarter and more efficient"

I think we're probably a decade of iteration on LLMs out, at least, and the entire market could pivot if the right breakthrough happens - some GPT-2 moment demonstrating some novel architecture that convinces the industry to make the move could happen any time now.

dmazin•44m ago
Constraints can lead to innovation. Just two things that I think will get dramatically better now that companies have incentive to focus on them:

* harness design

* small models (both local and not)

I think there is tremendous low hanging fruit in both areas still.

com2kid•28m ago
China already operates like this. Low cost specialized models are the name of the game. Cheaper to train, easy to deploy.

The US has a problem of too much money leading to wasteful spending.

If we go back to the 80s/90s, remember OS/2 vs Windows. OS/2 had more resources, more money behind it, more developers, and they built a bigger system that took more resources to run.

Mac vs Lisa. Mac team had constraints, Lisa team didn't.

Unlimited budgets are dangerous.

cesarvarela•25m ago
Harness is a big one, Claude Code still has trouble editing files with tabs. I wonder how many tokens per day are wasted on Claude attempting multiple times to edit a file.
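The tabs complaint is plausible if the harness applies edits by exact string match. A minimal sketch of that failure mode (this is an illustration of why whitespace mismatches break edits, not Claude Code's actual internals; `normalize` is a hypothetical fallback):

```python
# If the file indents with tabs but the model reproduces the snippet
# with spaces, a literal search-and-replace never finds the target.
file_text = "def f():\n\tx = 1\n\treturn x\n"       # file uses tabs
model_old = "def f():\n    x = 1\n    return x\n"   # model emitted spaces

print(model_old in file_text)  # False: the edit would be rejected

def normalize(s: str, tab_width: int = 4) -> str:
    """Compare indentation-insensitively by expanding tabs to spaces."""
    return s.expandtabs(tab_width)

print(normalize(model_old) in normalize(file_text))  # True: match found
```

A harness that retries with whitespace-normalized matching (then re-applies the original indentation) avoids burning tokens on repeated failed edits.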
dataviz1000•15m ago
What do you mean by harness here?
Ifkaluva•8m ago
When you go to the command line and type “Claude”, there is an LLM, and everything else is the harness
codybontecou•1m ago
pi vs. Claude Code vs. Codex: these are all agent harnesses which run a model (in pi's case, any model) with a system prompt and their own default set of tools.
czk•36m ago
"adaptive" thinking
itmitica•36m ago
The current inference system is on a down slope.

It remains to be seen what new wave of AI system or systems will replace it, making the whole current architecture obsolete.

Meanwhile, they are milking it, in the name of scarcity.

henry2023•34m ago
The US is bound by energy and China is bound by compute power. The one who solves its limitation first will end this “Scarcity Era”.
jakeinspace•26m ago
China is installing something like 500 GW of wind and solar per year now. Even if they're only able to build and otherwise access chips that have half the SoTA performance per watt, they will win.
odo1242•8m ago
Performance per dollar may be more important than performance per watt here, though
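The perf/$ vs. perf/W trade-off is easy to sketch numerically. A minimal tokens-per-dollar model, with hypothetical chip prices, efficiencies, and electricity costs (none of these numbers correspond to real chips):

```python
def tokens_per_dollar(tokens_per_joule: float, chip_price: float,
                      lifetime_joules: float, usd_per_kwh: float) -> float:
    """Total tokens over a chip's life divided by hardware plus
    electricity cost. All inputs are illustrative assumptions."""
    energy_cost = lifetime_joules / 3.6e6 * usd_per_kwh  # joules -> kWh
    return tokens_per_joule * lifetime_joules / (chip_price + energy_cost)

# Both chips draw 500 W for 5 years; the "SOTA" chip is twice as
# efficient per joule but costs far more (hypothetical figures):
lifetime_j = 5 * 365 * 24 * 3600 * 500
sota = tokens_per_dollar(2.0, 30_000, lifetime_j, 0.05)
cheap = tokens_per_dollar(1.0, 8_000, lifetime_j, 0.05)
print(cheap > sota)  # True: with cheap power, the less efficient chip wins
```

With abundant cheap electricity, hardware price dominates the denominator, which is the point being made about China's position.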
CuriouslyC•13m ago
The dynamics vastly favor China. Part of the reason the US sprinting towards "ASI" isn't totally boneheaded is that the US and its industry need a hail-mary play to "win" the game; if they play it safe, they lose for sure.
leptons•10m ago
I'd be fine with a world without AI, honestly. Nobody really wins this race except the very wealthy. And I don't think it's really going to play out the way the wealthy think it will. It's more like a dog catching a car than it is a race.
odo1242•5m ago
> It's more like a dog catching a car than it is a race.

What does this mean? I didn't understand the analogy.

Miraste•8m ago
China's domestic chips are increasingly close to state-of-the-art. The US electrical grid is... not.
com2kid•31m ago
To bang on the same damn drum:

Open-weight models are 6 months to a year behind SOTA. If you were building a company a year ago based on what AI could do then, you can build a company today on models that run locally on a user's computer. Yes, that may mean requiring your customers to buy MacBooks or desktops with Nvidia GPUs, but if your product actually improves productivity by any reasonable amount, that purchase cost is quickly made up for.

I'll argue that for anything short of full computer control or writing code, the latest Qwen model will do fine. Heck, you can get a customer-service voice chat bot running in 8GB of VRAM plus a couple more gigs for the ASR and TTS engines, and it'll be more capable than the chatbots that hundreds of millions were spent on when they were powered by GPT-4.x.

This is like arguing the age of personal computing was over because there weren't enough mainframes for people to telnet into.

It misses the point. Yes, deployment and management of personal PCs was a lot harder than dumb terminal + mainframe, but the future was obvious.
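The 8GB VRAM figure above is roughly consistent with a quantized ~7B-parameter model. A rough sizing sketch (the 20% overhead factor for KV cache and activations is an illustrative assumption, not an exact requirement):

```python
def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough weight-memory estimate: params * bits/8 bytes, plus ~20%
    overhead for KV cache and activations (illustrative, not exact)."""
    return params_b * 1e9 * bits / 8 / 1e9 * overhead

# A 7B model at 4-bit quantization fits comfortably in 8 GB:
print(round(vram_gb(7, 4), 1))   # 4.2
# The same model at 16-bit precision does not:
print(round(vram_gb(7, 16), 1))  # 16.8
```

This is why quantization, not just smaller parameter counts, is what made local deployment on consumer hardware practical.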

zozbot234•24m ago
The real advantage of Open Weight models from a compute scarcity POV is that they're repurposing the compute users need to have around anyway for their own use. That's great but it's also limited in scope. There's only so many engineering/architecture/Gfx special effects workstations around that can now run reasonable mid-sized models "for free" during downtime because they had to be available already for other uses. Everything else will only increase the scarcity, not redress it, unless you only expect users to run very small or very slow models.
space_fountain•23m ago
I've seen this claimed, but I'm not sure it's been true for my use cases. I should try a more involved analysis, but so far open models seem much less even in their skills. I think this makes sense if a lot of them are built by distilling larger models. It seems likely that with task-specific fine-tuning this holds?
com2kid•15m ago
What are you trying to do?

Write code? No. Use frontier models. They are subsidized and amazing, and they get noticeably better every few months.

Literally anything else? Smaller models are fine. Classifiers, sentiment analysis, editing blog posts, tool calling, whatever. They can go through documents and extract information, summarize, etc. When making a voice chat system a while back, I used a cheap open-weight model and just asked it "is the user done speaking yet?" by passing in transcripts of what had been spoken so far, and this was 2 years ago with a crappy, cheap, small model. Be creative.

I wouldn't trust them to do math, but you can tool call out to a calculator for that.

They are perfectly fine at holding conversations. Their weights aren't large enough to have every book ever written contained in them, or the details of every movie ever made, but unless you need that depth and breadth of knowledge, you'll be fine.
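The end-of-turn trick described above can be sketched as a simple classifier prompt. The prompt wording and the `ask_llm` callable are hypothetical stand-ins (any small instruct model's chat-completion call would do), not the commenter's actual setup:

```python
def build_turn_prompt(transcript: str) -> str:
    """Wrap the live transcript in a yes/no turn-completion question."""
    return (
        "You are a turn-taking detector for a voice assistant.\n"
        "Given the live transcript so far, answer YES if the user has\n"
        "finished their thought, or NO if they are likely mid-sentence.\n\n"
        f"Transcript: {transcript!r}\n"
        "Answer (YES/NO):"
    )

def is_turn_complete(transcript: str, ask_llm) -> bool:
    """ask_llm is any callable mapping a prompt string to a completion."""
    return ask_llm(build_turn_prompt(transcript)).strip().upper().startswith("YES")

# Usage with a stubbed model, just to show the control flow:
print(is_turn_complete("book me a table for two", lambda p: "YES"))  # True
```

The appeal is that this needs no fine-tuning: a few-billion-parameter model answering a binary question is cheap enough to call on every ASR chunk.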

dist-epoch•10m ago
Buy new Macs from where? There is a shortage of RAM, SSD, GPUs, and the CPU shortage just started.
paulddraper•28m ago
This is wrong along multiple axes.

1. Supply can scale. You can point to COVID supply-chain shocks, but the problem there was a temporary change in demand. No one spins up a whole fab to address a 3-month spike, whereas AI is not a temporary demand change.

2. Models are getting more efficient. DeepSeek V3 was 1/10th the cost of contemporary ChatGPT. Open weight models get more runnable or smarter every month. Cutting edge is always cutting edge, but if scarcity is real, model selection will adjust to fit it.

byyoung3•25m ago
distillation is an equalizing force
yalogin•22m ago
Does this also mean RAM prices are not coming down anytime soon?
stronglikedan•12m ago
they already are
dist-epoch•12m ago
yes, and it will keep increasing
wg0•8m ago
There's another side to it too.

Whoever is running and selling their own models is invested to the last dime available in the market.

Those valuations are already ridiculously high, be it Anthropic or OpenAI: easily a couple of trillion dollars if combined.

All that investment is seeking a return. Correct me if I'm wrong.

Developers and software companies are the only serious users, because they (mostly) review the output of these models out of both culture and necessity.

Anywhere else? Other fields? There, these models aren't as useful, and revenue from software companies is by no means going to bring returns on trillion-dollar valuations. Correct me if I'm wrong.

To make matters worse, there's a hole in the bucket in the form of open-weight models. When squeezed further, software companies would either deploy open-weight models or resort to writing code by hand, because this is a very skilled and hardworking tribe: they've been doing this all their lives, and whole careers are built on it. Correct me if I'm wrong.

Eventually, ROI might not be what VCs expect, and constant losses might lead to bankruptcies. All that data-center build-out would suddenly be looking for someone to rent the compute capacity, and the result would be dime-a-dozen open-weight model providers with generous usage tiers, capitalizing on capacity whose bankrupt owners want to liquidate it as fast as possible to recoup whatever investment they can.

EDIT: Typos

Show HN: NoFS – What if files are just projections, graph is the truth?

https://nofs.ai/
1•mmethodz•2m ago•0 comments

Casus Belli Engineering

https://marcosmagueta.com/blog/casus-belli-engineering/
1•schonfinkel•9m ago•0 comments

Closure of Radio 4 on Long Wave (LW)

https://www.bbc.co.uk/reception/work-warning/news/radio4lw
1•austinallegro•10m ago•1 comments

GRPO explained: Group Relative Policy Optimization for LLM fine-tuning

https://cgft.io/learn/grpo-intro/
1•kumama•11m ago•0 comments

U.S. to Create High-Tech Manufacturing Zone in Philippines

https://www.wsj.com/world/asia/u-s-to-create-high-tech-manufacturing-zone-in-philippines-017c1668
4•dcgudeman•13m ago•0 comments

15% of Reddit Posts are Likely AI-generated in 2025

https://originality.ai/blog/ai-reddit-posts-study
2•akyuu•13m ago•0 comments

Street Fighter 2026 Trailer

https://www.youtube.com/watch?v=gX0Btbbddxk
2•havblue•16m ago•1 comments

Reed Hastings is leaving Netflix after 29 years

https://www.engadget.com/entertainment/streaming/reed-hastings-is-leaving-netflix-after-29-years-...
2•andsoitis•16m ago•0 comments

Helpful translations from British English (2015)

https://www.independent.co.uk/news/uk/home-news/chart-shows-what-british-people-say-what-they-rea...
1•worik•17m ago•1 comments

Unicorn Market Cap 2026: SF Is the GenAI Super Cluster

https://blog.eladgil.com/p/unicorn-market-cap-2026-sf-is-the
1•gmays•19m ago•0 comments

Ollama v0.21.0-Rc0

https://github.com/ollama/ollama/releases/tag/v0.21.0-rc0
1•maxloh•19m ago•0 comments

Release PiClaw v1.8.0 – This Is Spinal Tap

https://github.com/rcarmo/piclaw/releases/tag/v1.8.0
2•rcarmo•20m ago•0 comments

Could AI's leading men become as powerful as Ford or Rockefeller?

https://www.economist.com/business/2026/04/16/could-ais-leading-men-become-as-powerful-as-ford-or...
1•andsoitis•21m ago•0 comments

New unsealed records reveal Amazon's price-fixing tactics, California AG claims

https://www.theguardian.com/us-news/ng-interactive/2026/apr/16/amazon-price-fixing-california-law...
4•kmfrk•21m ago•1 comments

Data Science Weekly – Issue 647

https://datascienceweekly.substack.com/p/data-science-weekly-issue-647
2•sebg•22m ago•0 comments

First trailer released for western starring AI version of Val Kilmer

https://www.theguardian.com/film/2026/apr/16/first-trailer-released-for-ai-val-kilmer-western
2•bookofjoe•22m ago•0 comments

Visualizing 100k prime numbers in 3D

https://joshumax.github.io/beautiful-prime-numbers/
1•joshumax•24m ago•0 comments

Free instant WCAG 2.2 accessibility audit

https://webpossum.com
1•raphaelheide•24m ago•0 comments

How to Deconstruct Almost Anything (1993)

http://www.fudco.com/chip/deconstr.html
1•pocksuppet•26m ago•0 comments

Show HN: Tracking Top US Science Olympiad Alumni over Last 25 Years

https://www.perplexity.ai/computer/a/us-olympiad-tracker-__5Gzx3tQaKOInGlalN8sQ
2•bkls•27m ago•0 comments

A jury declared Live Nation a monopoly. But ticket prices won't drop just yet

https://text.npr.org/nx-s1-5787491
1•mooreds•29m ago•0 comments

The MacBook Neo Guide

https://randsinrepose.com/archives/the-macbook-neo-guide/
3•mooreds•30m ago•0 comments

Red hair & fair skin favored by natural selection last 10k years: vit D production

https://www.theguardian.com/science/2026/apr/16/red-hair-gene-favoured-natural-selection-study
2•bookofjoe•30m ago•0 comments

Guy builds AI driven hardware hacker arm from duct tape, old cam and CNC machine

https://github.com/gainsec/autoprober
24•scaredpelican•33m ago•2 comments

Worm's-Eye View

https://en.wikipedia.org/wiki/Worm%27s-eye_view
2•signorovitch•35m ago•0 comments

Machine Learning Operations on ZYNQ FPGA Board for Real-Time Face Recognition

https://www.mdpi.com/2571-5577/9/4/71
1•PaulHoule•35m ago•0 comments

Objection – The AI Tribunal of Truth

https://objection.ai/
1•_DeadFred_•36m ago•0 comments

'Fireproof' batteries create their own internal firewall when the heat is on

https://newatlas.com/energy/fireproof-batteries-internal-firewall/
2•breve•36m ago•0 comments

A practical guide to Git worktrees

https://harness.mikelyons.org/guide.html
1•frenchie4111•37m ago•0 comments

Show HN: Talk to all your agents in one place

https://github.com/Potarix/agent-hub
1•YoungGato•37m ago•0 comments