
Open WebSearch

https://openwebsearch.eu/
1•jruohonen•1m ago•0 comments

Ask HN: How do you prepare for investor meetings?

1•uchibeke•5m ago•0 comments

Teen's live-streamed suicide set off exhaustive search for 'White Tiger'

https://www.washingtonpost.com/investigations/interactive/2025/white-tiger-764-fbi-search/
1•aspenmayer•6m ago•2 comments

Chess but it's Battle Royale [video]

https://www.youtube.com/watch?v=CONozr99agQ
1•amichail•8m ago•0 comments

Future small modular reactors that will power AI, cloud services in Pacific NW

https://www.aboutamazon.com/news/sustainability/amazon-smr-nuclear-energy
2•rntn•9m ago•0 comments

Building Decentralized Workflows That Scale

https://www.dbos.dev/blog/scaleable-decentralized-workflows
1•KraftyOne•11m ago•0 comments

How to Build an Agent

https://samdobson.uk/posts/how-to-build-an-agent/
1•tech4bueno•13m ago•0 comments

Flashlights and Lighthouses for Learning AI

https://substack.com/inbox/post/176440211
1•mathattack•14m ago•0 comments

Nival has released the source code for "Blitzkrieg 2" to the public

https://wnhub.io/news/other/item-48930
1•birdculture•16m ago•0 comments

Asking AI to build scrapers should be easy right?

https://www.skyvern.com/blog/asking-ai-to-build-scrapers-should-be-easy-right/
1•suchintan•16m ago•0 comments

Ace Frehley Has Died

https://www.nytimes.com/2025/10/16/arts/music/ace-frehley-dead.html
8•brudgers•19m ago•0 comments

Food Portion Sizes

https://www.livewelldorset.co.uk/articles/measuring-portion-sizes/
1•dzonga•24m ago•0 comments

Show HN: Lightning-SimulWhisper: Real-Time ASR for Apple Silicon

https://github.com/altalt-org/Lightning-SimulWhisper
1•predict-woo•25m ago•0 comments

Show HN: Medicated Emacs, Work-Ready Vanilla Emacs

https://github.com/RolandMarchand/medicated-emacs
1•Moowool•25m ago•0 comments

The role of good code blocks in documentation

https://www.mintlify.com/blog/code-block-documentation
2•skeptrune•26m ago•0 comments

Tricolor Collapse Sends Fifth Third on a Hunt for Bad Collateral

https://www.bloomberg.com/news/articles/2025-10-17/fifth-third-house-to-house-search-finds-just-t...
1•zerosizedweasle•28m ago•0 comments

Compiler optimizations for 5.8ms GPT-OSS-120B inference (not on GPUs)

https://furiosa.ai/blog/serving-gpt-oss-120b-at-5-8-ms-tpot-with-two-rngd-cards-compiler-optimiza...
1•olibaw•28m ago•0 comments

Why Would OpenAI Allow Erotica in ChatGPT Now?

https://www.bloomberg.com/news/newsletters/2025-10-17/why-would-openai-allow-erotica-in-chatgpt-now
2•coloneltcb•30m ago•1 comments

OBS Studio 32.0

https://obsproject.com/blog/obs-studio-32-0-release-notes
1•mikece•34m ago•0 comments

GOG Has Had to Hire Private Investigators to Track Down IP Rights Holders

https://www.thegamer.com/gog-private-investigators-off-the-grid-ip-rights-holders/
8•haunter•35m ago•1 comments

Why Software Quality Disappeared: Culture

https://lukaswerner.com/post/2025-10-14@swe-qc-culture
2•derHackerman•36m ago•0 comments

Dan Bricklin on Building the First Killer App – Learning from Machine Learning

https://mindfulmachines.substack.com/p/dan-bricklin-lessons-from-building
1•splevine•36m ago•1 comments

Wall Street Races to Sell Risky ETFs as Crypto Crash Hits Retail

https://www.bloomberg.com/news/articles/2025-10-17/wall-street-races-to-sell-risky-etfs-as-crypto...
2•zerosizedweasle•37m ago•0 comments

The Weekly Edge: Adieu Kuzu, State of the Graph, NetworkX on Neptune Analytics

https://gdotv.com/blog/weekly-edge-adieu-kuzu-state-of-the-graph-17-october-2025/
1•bwmerklsasaki•38m ago•0 comments

Army Corps of Engineers pausing $11B in projects over shutdown

https://www.cnbc.com/2025/10/17/vought-budget-government-shutdown.html
5•zerosizedweasle•38m ago•0 comments

Petaluma Reusable Cup Initiative

https://returnmycup.com
2•PaulHoule•38m ago•0 comments

Forgejo v13.0 Is Available

https://forgejo.org/2025-10-release-v13-0/
8•birdculture•40m ago•1 comments

Die shots of as many CPUs and other interesting chips as possible

https://commons.wikimedia.org/wiki/User:Birdman86
1•uticus•40m ago•0 comments

IAS physicist discusses research program to unify sciences of mind and matter [video]

https://www.youtube.com/watch?v=zyODcDvkiE0
1•matiasz•48m ago•0 comments

Fingerprint reader Framework expansion card

https://github.com/theowoo/FW-fingerprint-expansion-card
2•LorenDB•48m ago•1 comments

OpenAI Needs $400B In The Next 12 Months

https://www.wheresyoured.at/openai400bn/
144•chilipepperhott•1h ago

Comments

alberth•1h ago
Why doesn't Anthropic need similar levels of capital (or do they)?
chilipepperhott•1h ago
I believe they do, but the author seems to focus on OpenAI since they're more of a household name.
kachapopopow•1h ago
because this is for building "AGI", this has little to nothing to do with their current offerings.

This also assumes that intelligence continues to scale with compute which is not a given.

sillysaurusx•1h ago
> This also assumes that intelligence continues to scale with compute which is not a given.

Isn’t it? Evidence seems to suggest that the more compute you throw at a problem, the smarter the system behaves. Sure, it’s not a given, but it seems plausible.

deadbabe•57m ago
In a brute force poorly architected way, perhaps.

But human brains are small and require far less energy to be very generally intelligent. So clearly, there must be a better way to achieve this AGI shit. Preferably something that runs locally in the palm of your hand.

kachapopopow•49m ago
it's not mathematically proven therefore it is not a given.
nutjob2•48m ago
> a problem

That word is carrying a heavy load. There's no evidence that scaling works indefinitely on this particular sort of problem.

In fact there is no evidence that scaling solves computing problems generally.

In more narrow fields more compute gets better results but that niche is not so large.

B56b•27m ago
No, that's no longer the case: https://www.newyorker.com/culture/open-questions/what-if-ai-...
IsTom•20m ago
It also depends on the amount of training data, that isn't really growing much after they scraped all the internet.
rediguanayum•1h ago
I don't think it's AGI, but rather video production. OpenAI wants to build the next video social network / ads / tv / movie production system. The moat is the massive compute required.
gkoberger•1h ago
I'm sure they're not against building this, and they definitely have competing priorities.

But my personal belief is Sam Altman has a singular goal: AGI. Everything else keeps the lights on.

sho_hn•56m ago
Is there any indication they are actually working on this and Altman is any good at pursuing this goal? I'm seriously asking, please inform the uninformed.

My impression is that I hear a lot more about basic research from the competing high-profile labs, while OpenAI feels focused on their established stable of products. They also had high-profile researchers leave. Does OpenAI still have a culture looking for the next breakthroughs? How does their brain trust rank?

Analemma_•51m ago
Huh, my read is exactly the opposite: Altman wants to be a trillionaire and isn't picky about how he gets there. If AGI accomplishes it, great, but if that's not possible, "just" making a megacorporation which permanently breaks the negotiating power of labor is fine too. Amodei is the one who I think actually wants to build AGI.
thelastgallon•43m ago
I think your read is right. There are a few people who want to be trillionaires and aren't too picky about how to get there: Elon Musk, Sam Altman, Trump, Larry Ellison, Peter Thiel, Putin. Maybe Bezos and Zuckerberg.

Of course, there wouldn't be many people who don't want to be trillionaires. Rare exceptions[1]. But these are the people with means to get there.

[1]: No means NO - Do you want a one million dollar answer NO!: https://www.youtube.com/watch?v=GtWC4X628Ek

wkat4242•11m ago
I definitely would not want to be a trillionaire yeah. Having a million or so would be nice but more and you get roped into all kinds of power play and you have to get security goons with you all the time to avoid getting kidnapped. I'd much rather be anonymous.
gkoberger•5m ago
Then why start a company where you have no equity? (Yes I believe he financially benefits from OpenAI, but the more straightforward way would be OpenAI equity)
timeon•43m ago
Isn't that just PR?
JumpCrisscross•51m ago
> this is for building "AGI"

I’m increasingly convinced this is AI’s public relations strategy.

When it comes to talking to customers and investors, AGI doesn’t come up. At fireside chats, AGI doesn’t come up.

Then these guys go on CNBC or whatnot and it’s only about AGI.

evandrofisico•1h ago
Anthropic is more secretive about their costs; Ed Zitron is currently investigating them, specifically on GCP.
cma•48m ago
Anthropic seems more comfortable using TPUs for overflow capacity. The recent Claude degradation was largely due to a bug from implementation differences with TPUs and from their writeup we got some idea of their mix between Nvidia and TPU for inference.

I'm not sure if OpenAI has been willing to deploy weights to Google infrastructure.

pdmccormick•1h ago
That seems like a lot of money. How quickly can sustainable capacity be built up in terms of building power plants, data center construction, silicon design and fabrication, etc.? Are these industries about to experience stratospheric growth, followed by a massive and painful adjustment, or does this represent a printing press or industrial revolution like inflection point?

Would anyone like to found a startup doing high-security embedded systems infrastructure? Peter at my username dot com if you’d like to connect.

bcrl•1h ago
Almost nothing in tech is sustainable outside of gold recycling.
moralestapia•1h ago
1) What
ares623•53m ago
4
sillysaurusx•1h ago
> Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.”

I’d argue the other way around: 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.

Beware the sigmoidal curve, though. Growth is exponential till it’s not.
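The sigmoid warning can be made concrete. A minimal sketch (every parameter here is an illustrative assumption, not an OpenAI figure): a logistic curve looks exponential early on, then flattens as it approaches saturation.

```python
import math

def logistic(t, cap=8.0, rate=1.2, midpoint=6.0):
    """Logistic (S-shaped) growth curve.

    cap, rate, midpoint are illustrative assumptions: a saturation level
    (say, billions of potential users), curve steepness, and the time of
    fastest growth.
    """
    return cap / (1 + math.exp(-rate * (t - midpoint)))

# Early on, each time step multiplies users by a near-constant factor,
# which is indistinguishable from exponential growth...
early_ratio = logistic(2) / logistic(1)
# ...but near saturation the same step barely moves the needle.
late_ratio = logistic(11) / logistic(10)
print(round(early_ratio, 2), round(late_ratio, 2))
```

The point: a growth chart alone can't tell you which part of the S-curve you're on.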

baobabKoodaa•49m ago
OpenAI's bottleneck first shifted from GPUs to energy. Next it will shift from energy to meatbags. I'm sure they will figure out some way to produce more of us to keep the growth rate going.
scarmig•12m ago
Eventually, we can replace human consumers with LLM agent consumers, and things can scale indefinitely.
sellmesoap•6m ago
You too can qualify as an "ugly bag of mostly water" just give us your CC number!
slg•40m ago
>100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time

In what way does it suggest that? What level of growth is evidence that a product is universally useful?

alemanek•32m ago
About 10% of the total world population is using it on a weekly basis. Take out those too old or young, or illiterate (technically or otherwise). Now subtract out the people without reliable internet and a computer/phone. That 10% gets a whole lot bigger.

That seems like pretty strong evidence that it is generally, if not universally, useful to everyone given the opportunity.
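The 10% figure above is quick arithmetic (the WAU count and world population are taken from this thread, not independently verified):

```python
# ~800M weekly active users against ~8B people worldwide (thread figures).
waus = 800e6
world_population = 8e9
share = waus / world_population
print(f"{share:.0%}")
```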

bdbdkdksk•27m ago
My work is apparently paying for seats in multiple AI tools for everybody. There's a corporate mandate that you "have to use AI for your job". People seem to mostly be using it for (a) slide decks with cringe images (b) making their PRs look more impressive by generating a bunch of ineffective boilerplate unit tests.
emp17344•4m ago
Only if you believe popularity is the same as usefulness.
B56b•30m ago
Seriously! These two things are laughably far apart. What on earth kind of leap of logic is this?
bee_rider•39m ago
I finally used it for a couple little things, but mostly as a fuzzier replacement for search, where it does do pretty well. Of course nowadays classic search is in shambles, so it's kind of like a mediocre prime-aged boxer fighting a 70-year-old champion or something.

Anyway, I bet it will be really useful for cool stuff if it can ever run on my laptop!

mattskr•36m ago
Reality check. UNICEF and the WHO say there are 2 billion people without access to clean drinking water. They have slightly more pressing issues than trying to log into chatgpt. Only slightly.

The blockchain/bitcoin bros tried the same marketing spin. "Bitcoin will end poverty once we get it into everyone's hands." When that started slipping, NFTs will save us all.

Yeah. Sure. Been there. Done that. Just needs "more investment"... and then more... then more... all because of self reported "growth".

adventured•29m ago
Nearly six billion people are using mobile phones, and most of those are smartphones now. There's no reason to think extending that low-cost utility device to the next billion adults isn't a good idea (so long as the cost isn't coming from their pocket, i.e. it should be subsidized). These are not at all mutually exclusive goals.

The latest LLMs are extraordinarily useful life agents as is. Most people would benefit from using them.

It'd be like pretending it's either water or education (pick one). The answer is both, and you don't have to pick one or the other in reality at all. The entities trying to solve each aspect are typically different organizations anyway.

sysguest•17m ago
"Most people would benefit from using them"

hmm maybe that "would benefit" is a bit too vague?

amelius•19m ago
"You told me I could find water in the well 20km North, but there wasn't any."

"Ah, you're absolutely right! Have you tried looking in the shop?"

wkat4242•13m ago
Yeah AI will put a lot of people out of a job, it will bring people into poverty not out.
throwmeaway222•28m ago
I don't really think people understand there are all sorts of non-chatgpt users that pay OpenAI thousands of dollars PER DAY - (>100k customers like this). They're not going to publish the data, but agentic flows make ChatGPT look like a cereal box.
bdbdkdksk•27m ago
Individuals who personally spent hundreds of thousands of dollars a year running agents? I would love to see one example.
throwmeaway222•26m ago
orgs
teaearlgraycold•20m ago
I find myself using it less over time. It’s still useful but once you’ve been using it for a while you get to know best when not to use it.
lawlessone•15m ago
>100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time

I'm not sure I understand the reasoning. Lots of people use a thing, so everyone should?

helsinkiandrew•13m ago
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.

For OpenAI I think the problem is that if browsers, operating systems, phones, word processors [some other system people already use and/or pay for] eventually integrate some form of generative AI that is good enough - and an integrated AI can be a lot less capable than the cutting edge and still win - what will be the market for a standalone AI for the general public?

There will always be a market for professional products, cutting edge research and coding tools, but I don’t think that makes a trillion dollar company.

emp17344•6m ago
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.

This doesn’t make any sense. Popular is not the same as useful. You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.

Instead, some studies have shown that LLMs are making professionals less productive:

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

cenamus•3m ago
Exactly, for how many people is Instagram/TikTok and friends actually useful? Sure, they're popular and also used by billions, but would every human on earth benefit from using those services?
overflyer•1h ago
Elon could give them 400B and still have a net worth of 100B, and from that he could get back to his former net worth in a few years. The world is not fair.
misiti3780•48m ago
what does this mean?
thevillagechief•46m ago
That's not how that works. Same reason we don't tax unrealized gains (unless you are Norway).
marijnz0r•41m ago
And The Netherlands
KeplerBoy•42m ago
You do realize that a large part of his wealth is tied to the valuation of Tesla (and SpaceX and many other investments).

Selling 100B worth of stocks for anything close to 100B is not possible. That volume would mini-crash the entire exchange.

bdangubic•1h ago
now do TSLA
basisword•1h ago
Vast sums of money are being invested in it (obviously in the hope of AGI), but I'm not sure the world would notice if OpenAI/current LLM products just disappeared tomorrow.
sho_hn•53m ago
Women on dating apps would.

Have a search for "chatfishing".

basisword•49m ago
I think the fix for that one is easy enough fortunately. Meet quickly. They can't 'chatfish' you in person.
nutjob2•47m ago
> Meet quickly.

Always a good dating strategy.

VBprogrammer•46m ago
I don't think that AGI is necessary for LLMs to be revolutionary. I personally use various AI products more than I use Google search these days. Google became the biggest company in the world based on selling advertising on its search engine.
nextworddev•44m ago
Why is this guy so angry?

That aside, his math is wrong

dwedge•39m ago
He's not angry; it's an angsty way of writing. A lot of us used to write like that as teenagers. There was a time around 5 years ago when every best-selling book raced to have "fuck" or "vagina" in the title.
shocks•29m ago
Us British have a unique relationship with profanity as a way to communicate.

edit: Aussies and kiwis too!

mrkeen•43m ago
I'm not sure if I missed this in the article, but what's the cost of failure?

Why can't OpenAI keep projecting/promising massive data centre growth year after year, fail to deliver, and keep making Number Go Up?

IT4MD•39m ago
>Why can't OpenAI keep projecting/promising massive data centre growth year after year, fail to deliver, and keep making Number Go Up?

Because Nvidia will eventually run out of money, and the incestuous loop between Nvidia funding AI entities, who then use those funds to buy Nvidia chips, artificially propping up Nvidia's stock price, will end. And poof.

Competing forces are the market's insatiable need for growth every quarter, and other countries also chasing AI, which will not slow down even if countries like the US do.

IsTom•11m ago
If they keep missing, the hype will burn out eventually.
rich_sasha•43m ago
It's going to perhaps sound nuts, but I'm beginning to wonder if America is actually a giant Ponzi scheme.

I've been thinking about American exceptionalism - the way it is head and shoulders above Europe and the developed world in terms of GDP growth, market returns, start-up successes etc. - and what might be the root of this success. And I'm starting to think that, apart from various mild genuine effects, it is also a sequence of circular self-fulfilling prophecies.

Let's say you're a sophisticated startup and you want some funding. Where do you go? US of course - it has the easiest access to capital. It does so presumably because US venture funds have an easier time raising funds. And that's presumably because of their track record of making money for investors - real, or at least perceived. They invest in these startups and they exit at a profit, because US companies have better valuations than elsewhere, so at IPO investors lap up the shares and the VCs make money. It's easy to find buyers for US stocks because they're always going up. In turn, they're going up because, well, there's lots of investors. It's much easier to raise billions for data centres and fairy dust because investors are in awe of what can be done with the money and anyway line always go up. Stocks like TSLA have valuations you couldn't justify elsewhere. Maybe because they will build robot AI rocket taxis, or maybe because the collective American Allure means valuations are just high.

The beauty of this arrangement is that the elements are entangled in a complex web of financial interdependency. If you think about these things in isolation, you wouldn't conclude there's anything unusual. US VC funding is so good because there's a lot of capital - lucky them. This thought of circularity only struck me when trying to think of the root cause - the nuclear set of elements that drive it. And I concluded any reason I can think of is eventually recursive.

I'm not saying America is just dumb luck kept together by spittle, of course there are structural advantages the US has. I'm just not sure it really is that much better an economic machine than other similar countries.

One difference to a Ponzi scheme is that you might actually hit a stable level and stay there rather than crash and burn. So it's more like a collective investment into a lottery. OpenAI might burn $400bn and achieve singularity, then proceed to own the rest of the world.

But I can't shake the feeling that a lot of recent US growth is a bit of smoke and mirrors. After adjusting for tech, US indices didn't outperform European ones post GFC, IIRC. Much of its growth this year is AI, financed presumably by half the world and maintained by sky-high valuations. And no one says "check" because, well, it's the US and the line always go up.

ctoth•42m ago
His "$400B in next 12 months" claim treats OpenAI as paying construction costs upfront. But OpenAI is leasing capacity as operating expense - Oracle finances and builds the data centers [1]. This is like saying a tenant needs $5M cash because that's what the building cost to construct.

The Oracle deal structure: OpenAI pays ~$30B/year in rental fees starting fiscal 2027/2028 [2], ramping up over 5 years as capacity comes online. Not "$400B in 12 months."

The deals are structured as staged vendor financing:

- NVIDIA "invests" $10B per gigawatt milestone, gets paid back through chip purchases [3]
- AMD gives OpenAI warrants for 160M shares (~10% equity) that vest as chips deploy [4]
- As one analyst noted: "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia" [3]

This is circular vendor financing where suppliers extend credit betting on OpenAI's growth. It's unusual and potentially fragile, but it's not "OpenAI needs $400B cash they don't have."

Zitron asks: "Does OpenAI have $400B in cash?"

The actual question: "Can OpenAI grow revenue from $13B to $60B+ to cover lease payments by 2028-2029?"

The first question is nonsensical given deal structure. The second is the actual bet everyone's making.

His core thesis - "OpenAI literally cannot afford these deals therefore fraud" - fails because he fundamentally misunderstands how the deals work. The real questions are about execution timelines and revenue growth projections, not about OpenAI needing hundreds of billions in cash right now.

There's probably a good critical piece to write about whether these vendor financing bets will pay off, but this isn't it.

[1] https://www.cnbc.com/2025/09/23/openai-first-data-center-in-...

[2] https://w.media/openai-to-rent-4-5-gw-of-data-center-power-f...

[3] https://www.cnbc.com/2025/09/22/nvidia-openai-data-center.ht...

[4] https://techcrunch.com/2025/10/06/amd-to-supply-6gw-of-compu...
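The revenue question in the parent comment implies a specific growth rate. A quick sketch using the comment's own figures ($13B today, $60B+ needed by roughly 2028; the three-year horizon is an assumption):

```python
# Compound annual growth rate needed to go from $13B to $60B in 3 years.
start_revenue_bn = 13.0
target_revenue_bn = 60.0
years = 3
cagr = (target_revenue_bn / start_revenue_bn) ** (1 / years) - 1
print(f"required growth: {cagr:.0%} per year, compounded")
```

Roughly two-thirds revenue growth per year, every year, which is the bet being made.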

thelastgallon•34m ago
> His "$400B in next 12 months" claim treats OpenAI as paying construction costs upfront. But OpenAI is leasing capacity as operating expense - Oracle finances and builds the data centers [1].

It is bagholders all the way down[1]! The final bagholder will be the taxpayer/pension holder.

[1]https://en.wikipedia.org/wiki/Turtles_all_the_way_down

thewebguyd•18m ago
It's going to be 2008 bailouts again, but much worse.

These companies are doing all sorts of round-tripping on top of propping up the economy on a foundation of fake revenue, on purpose, so that when it all comes crumbling down they can go cry to the feds: "Help! We are far too big to fail, the fate of the nation depends on us getting bailed out at taxpayer expense."

dcre•32m ago
Suffice it to say this is not the first time Ed Zitron has been egregiously wrong on both analysis and basic facts. It's not even the first time this week.

I wrote a post about his insistence that the "cost of inference" is going up. https://crespo.business/posts/cost-of-inference/

onlyrealcuzzo•23m ago
OpenAI is currently growing WAUs at ~122.8% annualized growth (down from ~461.8% just 10 months ago).

Assuming their growth rate is getting close to stabilizing and will be at ~100% for 3 years to end of 2028 - that'd be $104B in revenue, on 6.4B WAUs.

I wouldn't bank on either of those numbers - but Oracle and Nvidia kind of need to bank on it to keep their stocks pumped.

Their growth decay is around 20% every 2 months - meaning that by this time next year, they could be closer to 1.2B WAUs than to 1.6B WAUs, and the following year they could be closer to 1.4B WAUs than to 3.2B WAUs.

Impressive, for sure, but still well below Google and Facebook, revenue much lower and growth probably even.
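The decay model described above can be sketched as compounding with a shrinking growth rate (the 800M starting WAUs and the ~20% bimonthly decay are from the comment; the initial 14% bimonthly growth rate is an illustrative assumption):

```python
# Project WAUs 12 months out with a growth rate that itself decays.
waus_bn = 0.8      # starting weekly active users, in billions (thread figure)
growth = 0.14      # assumed growth per 2-month period (illustrative)
decay = 0.20       # the growth rate shrinks ~20% each period (per the comment)

for _ in range(6):             # six 2-month periods = 12 months
    waus_bn *= 1 + growth
    growth *= 1 - decay
print(round(waus_bn, 2))
```

Under these assumptions the projection lands between the 1.2B and 1.6B bounds the comment gives.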

ibejoeb•39m ago
Let's assume that estimate is good. For some perspective and context, the last finalized DOD budget (2023) was $815B and, plus supplementals, turned into about $852 billion.

AGI is absolutely a national security concern. Despite it being an enormous number, it'll happen. It may not be earmarked for OpenAI, but the US is going to ensure that the energy capability is there.

rchaud•32m ago
> AGI is absolutely a national security concern.

This may well be the PR pivot that's to come once it becomes clear that taxpayer funding is needed to plug any financing shortfalls for the industry - it's "too big to let fail". It won't all go to OpenAI, but be distributed across a consortium of other politically connected corps: Oracle, Nvidia/Intel, Microsoft, Meta and whoever else.

adventured•24m ago
The top six US tech companies are generating ~$620 billion per year in operating income (likely to be closer to $700 billion in another 12-18 months). They can afford to spend $2 trillion on this over the next decade without missing a beat. Their profit over that timeline will plausibly be $8 to $10 trillion (and of course something dramatic could change that). That's just six companies.
rchaud•5m ago
Fears of an AI bubble originate from the use of external financing needed to pay for infrastructure investments, which may or may not pay off for the lenders.

These 6 companies are using only a small portion of their own cash reserves to invest, and using private credit for the rest. Meta is getting a $30 billion loan from PIMCO and Blue Owl for a datacenter [0], which they could easily pay for out of their own pocket. There are also many datacenters being funded through asset-backed securities or commercial mortgage-backed securities [1], the market for which can quickly collapse if expected income doesn't materialize, leading to mortgage defaults, as in 2008.

[0] https://www.reuters.com/legal/transactional/meta-set-clinch-...

[1] https://www.etftrends.com/etf-strategist-channel/securitizin...

ChrisArchitect•37m ago
Related:

They Don't Have the Money: OpenAI Edition

https://news.ycombinator.com/item?id=45545236

Havoc•34m ago
They sure are writing cheques fast. Presumably Sam has a plan.
r33b33•34m ago
Sorry, best I can do is $20
Group_B•33m ago
At this point we all know this is just a massive bubble. I'm done paying attention to it, really. I'm prepared for all my investments to go down in the next 1-5 years. If you're nearing retirement, now is the time to cash out. Yes, investments could go up in value a lot until the correction, but I don't really think that is worth the risk.
saulpw•23m ago
So cash out, and then what? Buy gold? Hang onto your cash while inflation takes off and dilutes it to nothing?
quux•33m ago
Can someone explain why we measure these datacenters in Gigawatts rather than something that actually measures compute like flops or whatever the AI equivalent of flops is?

To put it another way, I don't know anything but I could probably make a '1 GW' datacenter with a single 6502 and a giant bank of resistors.

refulgentis•30m ago
Because, to us tech nerds, GPUs are the core thing. With a PM hat on, it's the datacenter in toto. Put another way: how can we measure in flops? By the time all this is built out we're on the next gen of cards.
omgJustTest•28m ago
Measurement is in units of power because that's the ultimate use-cost, assuming scaling in compute efficiencies, capex costs, etc.
dcre•28m ago
My understanding is that there is no universal measure of compute power that applies across different hardware and workloads. You can interpret the power number to mean something close to the maximum amount of compute you can get for that power at a given time (or at least at time of install). It also works across geographies, cooling methods, etc. It covers all that.
morkalork•26m ago
Assuming a datacenter is more or less filled with $current_year chips, the number of flops is kind of a meaninglessly large number. It's big. How big? Big enough that it needs a nuclear power plant to run.
martinald•25m ago
Because that's the main constraint for building them - how much power can you get to the site, and the cooling involved.

Also the workloads completely change over time as racks get retired and replaced, so it doesn't mean much.

But you can basically assume with GB200s right now 1GW is ~5exaflops of compute depending on precision type and my maths being correct!
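The shape of such power-to-compute estimates can be sketched; both inputs below are illustrative assumptions (not the commenter's figures), and the result swings by orders of magnitude depending on precision type and how much power goes to cooling and networking:

```python
# Back-of-the-envelope: accelerators and peak flops per gigawatt.
site_power_w = 1e9              # a 1 GW facility
watts_per_gpu = 2_000           # assumed all-in draw per GPU, incl. overhead
flops_per_gpu = 1e16            # assumed ~10 PFLOPS at some low precision

gpus = site_power_w / watts_per_gpu
total_exaflops = gpus * flops_per_gpu / 1e18
print(f"{gpus:,.0f} GPUs, ~{total_exaflops:,.0f} EF peak")
```

Which is exactly why the grandparent's point holds: the wattage is the stable number, while the flops column changes with every hardware generation and precision choice.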

jauntywundrkind•6m ago
Yes! The varying precisions and maths feels like just the start!

Look at next-gen Rubin with its CPX co-processor chip to see things getting much weirder and more specialized. It's there for prefilling long contexts, which is compute intensive:

> Something has to give, and that something in the Nvidia product line is now called the "Rubin" CPX GPU accelerator, which is aimed specifically at parts of the inference workload that do not require high bandwidth memory but do need lots of compute and, increasingly, the ability to process video formats for both input and output as part of the AI workflow.

https://www.nextplatform.com/2025/09/11/nvidia-disaggregates...

To confirm what you are saying, there is no coherent unifying way to measure what's getting built other than by power consumption. Some of that budget will go to memory, some to compute (some to interconnect, some to storage), and it's too early to say what ratio each may have, to even know what ratios of compute:memory we're heading towards (and one size won't fit all problems).

Perhaps we end up abandoning HBM and DRAM! Maybe the future belongs to high-bandwidth flash! Maybe with its own Computational Storage! Trying to use figures like flops or bandwidth is applying today's answers to a future that might get weirder on us. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...

pseudosavant•17m ago
Think about it like refining electricity: a data center has a supply of raw electricity and a capacity for how much waste (heat) it can handle. The quality of the refining improving over time doesn't change the supply or waste capacity of the facility.
stray•6m ago
Back of the napkin: 1 gigawatt would power roughly 1.43 billion 6502s.
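That napkin math assumes roughly 700 mW per chip (a plausible figure for the NMOS 6502, but an assumption here):

```python
# How many 6502s can 1 GW power, at an assumed ~0.7 W per chip?
gigawatt_w = 1e9
watts_per_6502 = 0.7
chips = gigawatt_w / watts_per_6502
print(f"{chips / 1e9:.2f} billion 6502s")
```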
quux•2m ago
I appreciate you
tetha•2m ago
Mh, in my recently slightly growing, but still tiny experience with HW&DC-Ops:

You have a lot more things in a DC than just GPUs consuming power and producing heat. GPUs are the big ones, sure, but after a while switches, firewalls, storage units, other servers and so on all contribute significantly to the power footprint. A big small-packet, high-throughput firewall packs a surprising amount of compute capacity, eats a surprising amount of power and generates a lot of heat.

And that's the important abstraction/simplification you get when you start running hardware at scale. Your limitation is not necessarily TFLOPS, GHz or GB per cubic meter. It is easy to cram a crapton of those into a small space.

The main problem after a while is the ability to put enough power into the building and to move the heat out of it again. It sure would be easy to put a lot of resistors into a place to create a lot of power consumption. Hamburg Energy is currently building just that, to bleed excess solar power off the grid into heating.

The hard parts are connecting that to the 10 kV power grid safely and moving the heat away from the system fast.
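The "move the heat out" constraint can be sketched with the basic water-cooling relation Q = ṁ · c_p · ΔT. The 10 MW load and 10 K temperature rise below are assumed example values, just to show the scale of the flow rates involved:

```python
# How much cooling water does it take to carry heat out of a DC?
# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
# The 10 MW load and 10 K rise in the example are assumptions.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def water_flow_kg_per_s(heat_watts: float, delta_t_k: float) -> float:
    """Mass flow of cooling water needed to remove heat_watts
    at a delta_t_k temperature rise across the loop."""
    return heat_watts / (SPECIFIC_HEAT_WATER * delta_t_k)

print(f"{water_flow_kg_per_s(10e6, 10):.0f} kg/s")  # → 239 kg/s for 10 MW
```

Roughly 240 liters of water per second for a mid-sized 10 MW facility; at gigawatt scale the plumbing, not the silicon, becomes the engineering problem.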

locallost•32m ago
My search habits have evolved quite fast: when I search for something now, I first ask ChatGPT for quick results, which gives me pointers I then drill down on. Google's revenue for 2024 was $350 billion. I know it's not all Google ads, but a lot of it is. When you follow a link from ChatGPT, it always has a utm_source=Chatgpt parameter in it, so companies are quickly learning how important getting linked there is.

I'm not saying there's no bubble, and I personally anticipate a lot of turmoil in the next year, but monetisation of that would be the most primitive way of earning a lot of money. If anyone is a dead man walking, it's Google. For better or worse, ChatGPT has become to AI what Google was to search, even though I think Gemini is as good or even better. I also have my own doubts about the value of LLMs, because I've already run into a lot of caveats with the stuff they give you. But at the same time, as long as you don't believe it blindly, getting started with something new has never been easier. If you don't see value in that, I don't know what to tell you.

thewebguyd•8m ago
> For better or worse, Chatgpt has become to AI what Google was to search, even though I think Gemini is also good or even better.

Google definitely has the better model right now, but I think ChatGPT is already well on its way to becoming to AI what Google was to search.

ChatGPT is a household name at this point. Any non-tech person I ask or talk about AI with defaults to assuming it's ChatGPT. "ChatGPT" has become synonymous with "AI" for the average population, much in the same way "Google it" meant to perform an internet search.

So ChatGPT already has the popular brand. I think people are sleeping on Google, though. They have a hardware advantage, aren't reliant on Nvidia, and have way more experience than OpenAI in building out compute and with ML; Google has been an "AI company" since forever. Google's problem, if they lose, won't be tech or an inferior model; it will be that they absolutely suck at making products. What Google puts out always feels like a research project made public because someone inside thought it was cool enough to share. There's not a whole lot of product strategy or cohesion across the Google ecosystem.

haunter•24m ago
I'd be happy with $4000
refulgentis•22m ago
If you're reading this article and wondering "When is this house of cards going to collapse!?", a little advice, gained at a high price to myself: you can waste years waiting for it to collapse; 95% of the time, it never will. I never thought Uber or Tesla would survive COVID. I'd have $450K in bitcoin if I'd held onto the "joke" amount I bought in 2013.

Things that make me skip this specific narrative:

- There's some heavy-handed reaching to get to $400B over the next 12 months: guesstimate $50B = 1 GW of capacity, then list out 3.3 gigawatts across Broadcom chip purchases, Nvidia, and AMD

- OpenAI is far better positioned than any of the obvious failures I've foreseen in my 37 years on this rock. It's very, very hard to fuck up to the point you go out of business.

- Ed is repeating narratives instead of facts ("what did they spend that money on!? GPT-5 was a big letdown!") -- i.e. he remembers the chatgpt.com router discourse, and missed that it was the first OpenAI release that could capture the $30-50/day/engineer in spend we've been sending to Anthropic
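For reference, the guesstimate in the first bullet multiplies out as follows. Both inputs are the comment's own rough numbers, treated here as assumptions:

```python
# The comment's guesstimate, multiplied out (both inputs are rough assumptions):
COST_PER_GW = 50e9   # ~$50B of capex per GW of capacity
ANNOUNCED_GW = 3.3   # gigawatts listed across Broadcom, Nvidia, and AMD deals

implied_capex = COST_PER_GW * ANNOUNCED_GW
print(f"${implied_capex / 1e9:.0f}B")  # → $165B
```

Getting from that ~$165B to a $400B 12-month figure takes further extrapolation, which is the "reaching" the bullet points at.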

uchibeke•6m ago
What if they can't get it? What happens to companies built on their models, like this Meeting Prep AI I just launched today? https://news.ycombinator.com/item?id=45617686
retrocog•5m ago
It's like dumping all your cash into mainframes right before the PC revolution.
lumost•4m ago
How much does the capex model of a datacenter change when the goal is 100% utilization, with no care for node uptime beyond capex efficiency and hardware value maintenance?

I wouldn't be surprised if the cost came down by at least one order of magnitude, two if Nvidia and others adjust their margin expectations. If the bet is that OpenAI can ship crappy datacenters with crappy connectivity/latency characteristics in places with cheap or existing power, then that seems at least somewhat plausible.

OpenAI burning $40 billion on datacenters in the next year is almost guaranteed. Modern datacenter facilities are carefully engineered for uptime; I don't think OpenAI cares about rack uptime, or even facility uptime, at this scale.