frontpage.

Anthropic acquires Bun

https://bun.com/blog/bun-joins-anthropic
438•ryanvogel•1h ago•179 comments

100k TPS over a billion rows: the unreasonable effectiveness of SQLite

https://andersmurphy.com/2025/12/02/100000-tps-over-a-billion-rows-the-unreasonable-effectiveness...
91•speckx•1h ago•16 comments

I designed and printed a custom nose guard to help my dog with DLE

https://snoutcover.com/billie-story
178•ragswag•2d ago•24 comments

Learning music with Strudel

https://terryds.notion.site/Learning-Music-with-Strudel-2ac98431b24180deb890cc7de667ea92
268•terryds•6d ago•64 comments

Claude 4.5 Opus' Soul Document

https://simonwillison.net/2025/Dec/2/claude-soul-document/
6•the-needful•3m ago•1 comment

Mistral 3 family of models released

https://mistral.ai/news/mistral-3
472•pember•4h ago•149 comments

Amazon launches Trainium3

https://techcrunch.com/2025/12/02/amazon-releases-an-impressive-new-ai-chip-and-teases-a-nvidia-f...
4•thnaks•4m ago•0 comments

4.3M Browsers Infected: Inside ShadyPanda's 7-Year Malware Campaign

https://www.koi.ai/blog/4-million-browsers-infected-inside-shadypanda-7-year-malware-campaign
40•janpio•2h ago•7 comments

Zig's new plan for asynchronous programs

https://lwn.net/SubscriberLink/1046084/4c048ee008e1c70e/
99•messe•4h ago•83 comments

Poka Labs (YC S24) Is Hiring a Founding Engineer

https://www.ycombinator.com/companies/poka-labs/jobs/RCQgmqB-founding-engineer
1•arbass•2h ago

Nixtml: Static website and blog generator written in Nix

https://github.com/arnarg/nixtml
66•todsacerdoti•4h ago•20 comments

YesNotice

https://infinitedigits.co/docs/software/yesnotice/
98•surprisetalk•1w ago•43 comments

Addressing the adding situation

https://xania.org/202512/02-adding-integers
227•messe•7h ago•70 comments

Advent of Compiler Optimisations 2025

https://xania.org/202511/advent-of-compiler-optimisation
287•vismit2000•9h ago•47 comments

Python Data Science Handbook

https://jakevdp.github.io/PythonDataScienceHandbook/
144•cl3misch•6h ago•29 comments

IBM CEO says there is 'no way' spending on AI data centers will pay off

https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12
57•nabla9•58m ago•45 comments

OpenAI declares 'code red' as Google catches up in AI race

https://www.theverge.com/news/836212/openai-code-red-chatgpt
74•goplayoutside•4h ago•90 comments

Lowtype: Elegant Types in Ruby

https://codeberg.org/Iow/type
32•birdculture•4d ago•9 comments

Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)

https://github.com/marmotdata/marmot
70•charlie-haley•4h ago•16 comments

Apple Releases Open Weights Video Model

https://starflow-v.github.io
389•vessenes•13h ago•129 comments

School Cell Phone Bans and Student Achievement (NBER Digest)

https://www.nber.org/digest/202512/school-cell-phone-bans-and-student-achievement
11•harias•1h ago•7 comments

A series of vignettes from my childhood and early career

https://www.jasonscheirer.com/weblog/vignettes/
115•absqueued•6h ago•79 comments

What will enter the public domain in 2026?

https://publicdomainreview.org/features/entering-the-public-domain/2026/
439•herbertl•15h ago•296 comments

Anthropic Acquires Bun

https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone
41•httpteapot•1h ago•10 comments

YouTube increases FreeBASIC performance (2019)

https://freebasic.net/forum/viewtopic.php?t=27927
139•giancarlostoro•2d ago•34 comments

Proximity to coworkers increases long-run development, lowers short-term output (2023)

https://pallais.scholars.harvard.edu/publications/power-proximity-coworkers-training-tomorrow-or-...
144•delichon•5h ago•104 comments

Apple to beat Samsung in smartphone shipments for first time in 14 years

https://sherwood.news/tech/apple-to-beat-samsung-in-smartphone-shipments-for-first-time-in-14-years/
39•avonmach•1h ago•35 comments

Comparing AWS Lambda ARM64 vs. x86_64 Performance Across Runtimes in Late 2025

https://chrisebert.net/comparing-aws-lambda-arm64-vs-x86_64-performance-across-multiple-runtimes-...
110•hasanhaja•9h ago•48 comments

Progress on TypeScript 7 – December 2025

https://devblogs.microsoft.com/typescript/progress-on-typescript-7-december-2025/
40•DanRosenwasser•1h ago•12 comments

Beej's Guide to Learning Computer Science

https://beej.us/guide/bglcs/
310•amruthreddi•2d ago•119 comments

IBM CEO says there is 'no way' spending on AI data centers will pay off

https://www.businessinsider.com/ibm-ceo-big-tech-ai-capex-data-center-spending-2025-12
56•nabla9•58m ago

Comments

verdverm•52m ago
The IBM CEO is steering a broken ship that hasn't improved course; he's not someone whose words you should take seriously.

1. They missed the AI wave (hired me to teach Watson law only to lay me off 5 wks later, one cause of the serious talent issues over there)

2. They bought most of their data centers (as whole companies); they have no idea how to build and operate one, certainly not at the scale the "competitors" are operating at

nabla9•39m ago
Everyone should read his arguments carefully. Ponder them in silence and accept or reject them based on their strength.
nyc_data_geek1•24m ago
IBM can be a hot mess, and the CEO may not be wrong about this. These things are not mutually exclusive.
scarmig•21m ago
His argument follows almost directly, and trivially, from his central premise: a 0% or 1% chance of reaching AGI.

Yeah, if you assume technology will stagnate over the next decade and AGI is essentially impossible, these investments will not be profitable. Sam Altman himself wouldn't dispute that. But it's a controversial premise, and one there's no particular reason to think the... CEO of IBM has any insight into.

skeeter2020•7m ago
Then it seems like neither Sam Altman (pro) nor IBM (proxy con) has credible, or even really interesting or insightful, evidence, theories... even suggestions for what's likely to happen? i.e., we should stop listening to all of them?
malux85•38m ago
Sorry that happened to you; I have been there too.

When a company is hiring and laying off like that it's a serious red flag. The one that did that to me is dead now.

duxup•37m ago
Is his math wrong?
observationist•23m ago
IBM CEO has sour grapes.

IBM's HPC products were enterprise-oriented slop that banked on the company's reputation, and the ROI torched their credibility once compute costs started getting taken seriously. Watson and other products got smeared into kafkaesque, arbitrary branding for other product suites, and they were nearly all painful garbage, with mobile device management standing out as a particularly grotesque system to use. Now IBM lacks any legitimate competitive edge in any of the bajillion markets they tried to target, has no credibility in any of its former flagship domains, and nearly every one of its products is hot garbage that costs too much, often by orders of magnitude, compared to similar functionality you can get from open source or even free software offered and serviced by other companies. They blew a ton of money on HPC before there was any legitimate reason to do so. Watson on Jeopardy was probably the last legitimately impressive thing they did, and all of their tech and expertise has been outclassed since.

kenjackson•32m ago
I don't understand the math behind how we get to $80b for a gigawatt datacenter. What are the costs in that $80b? I literally don't understand how to get to that number -- I'm not questioning its validity. What percent is power consumption, versus land cost, versus building and infrastructure, versus GPUs, versus people, etc...
wmf•27m ago
https://www.investing.com/news/stock-market-news/how-much-do...
georgeecollins•24m ago
First, I think it's $80b per 100 GW datacenter. The way you figure that out is a GPU costs $x and consumes y power. The $x is pretty well known, for example an H100 costs $25-30k and uses 350-700 watts (that's from Gemini and I didn't check my work). You add an infrastructure (i) cost to the GPU cost, but that should be pretty small, like 10% or less.

So a 1 gigawatt data center uses n chips, where y*n = 1 GW. The total cost is roughly (x + i)*n.

I am not an expert so correct me please!
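
A minimal sketch of that estimate, using the rough H100 figures above (~$30k and ~700 W per card) plus a ~10% infrastructure overhead; all inputs are unverified placeholders:

    # Napkin math for the estimate above; all inputs are rough assumptions.
    gpu_price_usd = 30_000        # x: rough H100 price
    gpu_power_w = 700             # y: rough H100 power draw
    infra_overhead = 0.10         # i: infrastructure cost as a fraction of GPU cost

    datacenter_power_w = 1e9      # 1 GW
    n_gpus = datacenter_power_w / gpu_power_w            # n, where y * n = 1 GW
    total_cost = gpu_price_usd * (1 + infra_overhead) * n_gpus

    print(f"{n_gpus:,.0f} GPUs, ~${total_cost / 1e9:.0f}B")   # ~1,428,571 GPUs, ~$47B

With those placeholder numbers the cards alone land in the tens of billions of dollars per gigawatt.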

wmf•30m ago
$8T may be too big of an estimate. Sure you can take OpenAI's $1.4T and multiply it by N but the other labs do not spend as much as OpenAI.
bluGill•28m ago
This is likely correct overall, but it can still pay off in specific cases. However, those are not blind investments; they are targeted, with a planned business model.
bluGill•26m ago
I question the depreciation. Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them is an open question. CPUs stopped getting exponentially faster 20 years ago (they are faster, but not the jumps the 1990s got).
rlpb•23m ago
> Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them is an open question

Doesn't one follow from the other? If newer GPUs aren't worth an upgrade, then surely the old ones aren't obsolete by definition.

carlCarlCarlCar•12m ago
MTBF for data center hardware is short; DCs breeze through GPUs compared to even the hardest of hardcore gamers.

And there is the whole FOMO effect on business purchases; decision makers will worry their models won't be as fast.

Obsolete doesn't mean the reductive notion you have in mind, where theoretically it can still push pixels. Physics will burn them up, and "line go up" will drive demand to replace them.

lo_zamoyski•21m ago
> Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them

Then they won't be obsolete.

Negitivefrags•16m ago
I recently compared performance per dollar on benchmarks for CPUs and GPUs, today vs 10 years ago, and surprisingly, CPUs had much bigger gains. Until I saw that for myself, I thought exactly the same thing as you.

It seems shocking given that all the hype is around GPUs.

This probably wouldn't be true for AI-specific workloads, because one of the other things that happened in the last 10 years was optimising specifically for math with lower-precision floats.

maxglute•13m ago
I think the real issue is that current costs/demand mean Nvidia is gouging GPU prices, so the hardware:power-consumption cost ratio is 70:20 instead of 50:40 (with 10 for the rest of the datacenter). The reality is that GPUs are a serendipitous, path-dependent lock-in from gaming -> mining. TPUs are more power efficient. If the bubble pops and demand for compute goes down, Nvidia + TSMC will still be around, but the next-gen, AI-first bespoke-hardware premium will revert towards the mean, and we're looking at hardware that is 50% less expensive (no AI-race scarcity tax, i.e. 75% Nvidia margins) and uses 20% less power / opex. All of a sudden existing data centers become unprofitable stranded assets, even if they can be stretched past 5 years.
qwertyuiop_•26m ago
The question no one seems to be answering is: what will be the EOL for these newer GPUs being churned out by NVIDIA? What % of annual capital expenditure is GPU refresh? Will they be perpetually replaced as NVIDIA comes up with newer architectures and the AI companies chase the proverbial lure?
parapatelsukh•23m ago
The spending will be more than paid off, since the taxpayer is the lender of last resort. There are too many funny names among the investors / creditors, a lot of mountains in Germany and similar, ya know.
myaccountonhn•22m ago
> In an October letter to the White House's Office of Science and Technology Policy, OpenAI CEO Sam Altman recommended that the US add 100 gigawatts in energy capacity every year.

> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

And people think the climate concerns of AI are overblown. The US currently has ~1,300 GW of capacity, so that would be a huge increase each year.
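
For scale, a quick check of the two numbers quoted above (100 GW per year requested vs. ~1,300 GW of existing US capacity):

    # Rough scale check using the figures quoted in the comment above.
    us_capacity_gw = 1300
    requested_gw_per_year = 100
    print(f"{requested_gw_per_year / us_capacity_gw:.1%} of current US capacity, added every year")   # ~7.7%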

Octoth0rpe•22m ago
> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said

This doesn't seem correct to me, or at least it's built on several shaky assumptions. You would only have to 'refill' your hardware if:

- AI accelerator cards all start dying around the 5-year mark, which is possible given the heat density/cooling needs, but doesn't seem all that likely.

- Technology advances such that only the absolute newest cards can be used to run _any_ model profitably, which only seems likely if we see some pretty radical advances in efficiency. Otherwise, assuming your hardware is stable after 5 years of burn-in, you could continue to run older models on it at only the cost of the floorspace/power. Maybe you need new cards for new models for some reason (a new fp format that only new cards support? some magic amount of RAM? etc.), but it seems like there may be room for revenue via older/less capable models at a discounted rate.

mcculley•18m ago
But if your competitor is running newer chips that consume less power per operation, aren't you forced to upgrade as well and dispose of the old hardware?
Octoth0rpe•12m ago
Sure, assuming the power cost reduction or capability increase justifies the expenditure. It's not clear that that will be the case. That's one of the shaky assumptions I'm referring to. It may be that the 2030 nvidia accelerators will save you $2000 in electricity per month per rack, and you can upgrade the whole rack for the low, low price of $800,000! That may not be worth it at all. If it saves you $200k/per rack or unlocks some additional capability that a 2025 accelerator is incapable of and customers are willing to pay for, then that's a different story. There are a ton of assumptions in these scenarios, and his logic doesn't seem to justify the confidence level.
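
A quick payback-period check on those hypothetical figures (both are the illustrative numbers from the comment above, not real pricing):

    # Payback period for a hypothetical $800k-per-rack upgrade at two savings levels.
    upgrade_cost_per_rack = 800_000
    for monthly_savings in (2_000, 200_000):
        months = upgrade_cost_per_rack / monthly_savings
        print(f"${monthly_savings:,}/month saved -> payback in {months:.0f} months (~{months / 12:.1f} years)")

At $2,000/month the upgrade takes roughly 33 years to pay for itself, far beyond any plausible hardware life; at $200,000/month it pays back in a few months.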
abraae•15m ago
It's just the same dynamic as old servers. They still work fine but power costs make them uneconomical compared to latest tech.
acdha•8m ago
It’s far more extreme: old servers are still okay on I/O, and memory latency, etc. won’t change that dramatically, so you can still find productive uses for them. AI workloads are hyper-focused on a single type of work and, unlike most regular servers, are a limiting factor in direct competition with other companies.
zppln•2m ago
I'm a little bit curious about this. Where does all the hardware from the big tech giants usually go once they've upgraded?
dmoy•13m ago
5 years is maybe referring to the accounting schedule for depreciation on computer hardware, not the actual useful lifetime of the hardware.

It's a little weird to phrase it like that though because you're right it doesn't mean you have to throw it out. Idk if this is some reflection of how IBM handles finance stuff or what. Certainly not all companies throw out hardware the minute they can't claim depreciation on it. But I don't know the numbers.

Anyways, 5 years is an inflection point in the numbers. Before 5 years you get depreciation to offset some of the cost of running; after 5 years you do not, so the math does change.
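
A minimal illustration of that inflection point, assuming straight-line depreciation over a 5-year schedule (the numbers are made up; the point is only that the write-off disappears after year 5):

    # Straight-line depreciation over a 5-year schedule; purely illustrative numbers.
    capex = 1_000_000.0
    schedule_years = 5
    annual_writeoff = capex / schedule_years

    for year in range(1, 8):
        writeoff = annual_writeoff if year <= schedule_years else 0.0
        print(f"year {year}: depreciation write-off = ${writeoff:,.0f}")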

skeeter2020•10m ago
That is how the investments are costed, though, so it makes sense when we're talking return on investment: you can compare with alternatives under the same evaluation criteria.
austin-cheney•10m ago
It’s not about assumptions on the hardware. It’s about the current demands for computation and the expected growth of business needs. Since we have a couple of years to measure against, it should be extremely straightforward to predict. As such, I have no reason to doubt the stated projections.
maxglute•19m ago
How long can AI GPUs stretch? Be optimistic and say 10 years, and we're still looking at $400B+ in profit needed to cover interest. As a depreciating asset, silicon is closer to tulips than to rail or fiber.
criddell•18m ago
> But AGI will require "more technologies than the current LLM path," Krishna said. He proposed fusing hard knowledge with LLMs as a possible future path.

And then what? These always read a little like the underpants gnomes business model (1. Collect underpants, 2. ???, 3. Profit). It seems to me that the AGI business models require one company has exclusive access to an AGI model. The reality is that it will likely spread rapidly and broadly.

If AGI is everywhere, what's step 2? It seems like everything AGI generated will have a value of near zero.

irilesscent•6m ago
AGI has value in automation and optimisation, which increase profit margins. When AGI is everywhere, the game becomes who has the smartest AGI, who can offer it cheapest, who can specialise it for my niche, etc. Also, in this context AGI needs to run somewhere, and IBM stands to benefit from running other people's models.
scroot•17m ago
As an elder millennial, I just don't know what to say. That a once-in-a-generation allocation of capital should go towards... whatever this all will be, is certainly tragic given the current state of the world and its problems. Can't help but see it as the latest in a lifelong series of baffling, high-stakes decisions of dubious social benefit that necessarily have global consequences.
PrairieFire•10m ago
Agree the capital could be put to better use. However, I believe the alternative is that this capital wouldn't otherwise have been put to work in ways that allow it to leak to the populace at large. For some of the big investors in AI infrastructure, this is cash that previously went, and likely would otherwise have gone, toward stock buybacks. For many of the big investors pumping cash in, these are funds deploying the wealth of the mega rich that, again, would otherwise have been deployed in ways that don't leach down to the many who are now reaping it via this AI infrastructure boom (datacenter materials, land acquisition, energy infrastructure, building trades, etc.).
amanaplanacanal•5m ago
It could have, though. Higher taxes on the rich, spend it on social programs.
ayaros•9m ago
I'm a younger millennial. I'm always seeing homeless people in my city and it's an issue that I think about on a daily basis. Couldn't we have spent the money on homeless shelters and food and other things? So many people are in poverty, they can't afford basic necessities. The world is shitty.

Yes, I know it's all capital from VC firms and investment firms and other private sources, but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.

Yeah, the world is shitty, and resources aren't allocated ideally. Must it be so?

ic_fly2•17m ago
IBM might not have a data strategy or an AI plan, but he isn’t wrong about the inability to generate a profit.

A bit of napkin math: NVIDIA claims 0.4 J per token for their latest generation. A 1 GW plant with 80% utilisation can therefore produce 6.29×10^16 tokens a year.

There are ~10^14 tokens on the internet. ~10^19 tokens have been spoken by humans… so far.
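
That throughput figure is easy to reproduce (taking NVIDIA's ~0.4 J/token claim at face value):

    # Tokens per year from a 1 GW plant at 80% utilisation, at 0.4 J per token.
    joules_per_token = 0.4
    plant_power_w = 1e9
    utilisation = 0.80
    seconds_per_year = 365 * 24 * 3600

    tokens_per_year = plant_power_w * utilisation * seconds_per_year / joules_per_token
    print(f"{tokens_per_year:.2e} tokens per year")   # ~6.3e16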

lostmsu•13m ago
> ~10^14 tokens on the internet

Does that include image tokens? My bet is with image tokens you are off by at least 5 orders of magnitude for both.

senordevnyc•8m ago
I must be dense, why does this imply AI can't be profitable?
skeeter2020•11m ago
The interesting macro view on what's happening is to compare a mature data center operation (specifically a commoditized one) with the utility business. The margins here, and in similar industries with big infra build-out costs (e.g. rail), are quite small. Historically these businesses have not done well; I can't really imagine what happens when tech companies who've only ever known huge, juicy margins experience low single-digit returns on billions of investment.
eitally•3m ago
At some point, I wonder if any of the big guys have considered becoming grid operators. The vision Google had for community fiber (Google Fiber, which mostly fizzled out due to regulatory hurdles) could be somewhat paralleled with the idea of operating a regional electrical grid.
devmor•3m ago
I suppose it depends on your definition of "pay off".

It will pay off for the people investing in it, when the US government inevitably bails them out. There is a reason Zuckerberg, Huang, etc are so keen on attending White House dinners.

It certainly won't pay off for the American public.