
Nvidia is gearing up to sell servers instead of just GPUs and components

https://www.tomshardware.com/tech-industry/artificial-intelligence/jp-morgan-says-nvidia-is-gearing-up-to-sell-entire-ai-servers-instead-of-just-ai-gpus-and-componentry-jensens-master-plan-of-vertical-integration-will-boost-profits-purportedly-starting-with-vera-rubin
178•giuliomagnifico•2mo ago

Comments

re-thc•2mo ago
Soon Nvidia will sell AI itself instead of servers.
michaelbuckbee•2mo ago
Considering they have a pure service in selling Geforce Now (game streaming), that doesn't seem in any way far fetched.
Cthulhu_•2mo ago
To a point / by some definitions of the phrase AI they already do: https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling

I wouldn't be surprised if we see some major acquisitions or mergers happening in the next few years by one of the independent AI vendors like OpenAI and Nvidia.

Palomides•2mo ago
why? selling GPUs is way more profitable
reactordev•2mo ago
Why sell when you can rent?
Palomides•2mo ago
so you don't get stuck with many billions of dollars in useless GPUs and data centers when the bubble pops
re-thc•2mo ago
That's what Coreweave etc does and Nvidia already invests in them (i.e. has a stake).
giuliomagnifico•2mo ago
Selling a whole infrastructure is more profitable than selling components, and it also puts the customers in “sandboxes” with you.
hvenev•2mo ago
Don't they already sell servers? https://www.nvidia.com/en-us/data-center/dgx-platform/
p4ul•2mo ago
I had the same reaction. Haven't they been selling DGX boxes for almost 10 years now? And they've been selling the rack-scale NVL72 beast for probably a few years.[1]

What is changing?

[1] https://www.nvidia.com/en-us/data-center/gb200-nvl72/

AlanYx•2mo ago
When nVIDIA sells DGX directly they usually still partner with SuperMicro, etc. for deployment and support. It sounds like they're going to be offering those services in-house now, competing with their resellers on that front.
reactordev•2mo ago
Cutting out the Vendor like SuperMicro or HPE, they are going straight to consumer now.
rlupi•2mo ago
Hyperscalers and similar clients don't use DGX, but their own designs that integrate better with their custom designed datacenters

https://www.nvidia.com/en-us/data-center/products/mgx/

lvl155•2mo ago
We’re not far from Nvidia exclusively bundling ChatGPT. It’s a classic playbook from Microsoft.
MangoToupe•2mo ago
Why would Nvidia ever agree to that?
ptero•2mo ago
Chatgpt is not the only game in town. Any exclusivity deal will likely backfire against chatgpt.
mcintyre1994•2mo ago
I'm pretty sure they'd like to keep selling chips to all of OpenAI's competitors too.
gruturo•2mo ago
ChatGPT doesn't really have much of a moat. If it becomes Microsoft or Nvidia exclusive, it just opens an opportunity for its competitors. I barely notice which LLM I'm using unless it's something super specific where one is known to be stronger.
dmboyd•2mo ago
Aren’t they already supply constrained? Seems like this would be counterproductive in further limiting supply vs a strategy of commoditizing your complements. This seems closer to PR designed to boost share price rather than a cogent strategy.
MattRix•2mo ago
They’re only supply constrained on the chips themselves. Selling fully integrated racks allows them to get even more money per chip.
dboreham•2mo ago
In MBA-speak this is "capturing more of the value chain".
mikeryan•2mo ago
Huh. I view it the other way. If you’re supply constrained go straight to the consumer and capture the value that the middlemen building on top of your tech are currently profiting from.
energy123•2mo ago

> Further limiting supply

Even if they don't increase their GPU production capacity, that's not "limiting" supply. It's keeping it the same. Only now they can sell each unit for a larger profit margin.
jpecar•2mo ago
Servers? I thought they left even racks behind, they're now selling these "AI factories".
czbond•2mo ago
Didn't they watch Silicon Valley to learn that lesson? Don't sell the box.
thesuperbigfrog•2mo ago
What software will those Nvidia servers run?

Are they creating their own software stack or working with one or more partners?

kj4ips•2mo ago
They have a Ubuntu derivative called DGX OS, that they use on their current lines.
overfeed•2mo ago
I wonder which [publicly listed] companies would look at the abandonment of Jetson and still commit to having Nvidia set the depreciation schedule for them.
nijave•2mo ago
They already do have a pretty robust software stack that goes all the way to code/analytics libraries. I'm not sure on the current state of things but ~2020 they were automatically testing chip designs for performance regressions in analytics libraries across the entire stack from hardware to each piece of software
kj4ips•2mo ago
It's my opinion that nvidia does good engineering at the nanometer scale, but it gets worse the larger it gets. They do a worse job at integrating the same aspeed BMC that (almost) everyone uses than SuperMicro does, and the version of Aptio they tend to ship has almost nothing available in setup. With the price of a DGX, I expect far better. (Insert obligatory bezel grumble here)
ecshafer•2mo ago
Nvidia already sells servers?

What I don't really get is that Nvidia is worth like $4.5T on $130B revenue. If they want to sell servers, why don't they just buy Dell or HP? If they want CPUs, why not buy AMD, Qualcomm, Broadcom, or TI? (I know they got blocked on their ARM attempt before the AI boom.) Their revenue is too low to support their valuation; shouldn't they use this massive value to buy up companies to boost their revenue?

jack_tripper•2mo ago
>why don't they just buy Dell or HP?

Why buy a complex but relatively low margin business that comes with a lot of baggage they don't need, when they can focus on what they do best and let Dell and HP compete against each other for Nvidia's benefit?

Same reason why Apple doesn't buy Foxconn or TSMC.

mr_toad•2mo ago
> If they want to sell servers, why don't they just buy Dell or HP?

They want to sell HPC servers, not general purpose servers.

_w1tm•2mo ago
Sometimes building a new organization is easier than trying to improve a legacy one.
btian•2mo ago
But nVidia already sells servers (NVL72), and CPUs (Grace). Why buy a bunch of overlapping companies?

And no sane regulator on the planet will allow them to takeover AMD, Qualcomm, or Broadcom.

cyanydeez•2mo ago
Lucky for them Americans have recently gone insane.
disqard•2mo ago
A $1m dinner at Mar-a-Lago will take care of that.

https://www.npr.org/2025/04/09/nx-s1-5356480/nvidia-china-ai...

thefourthchime•2mo ago
In a sense, they already do, since they're heavily invested in CoreWeave. For those unfamiliar, CoreWeave was a crypto company that pivoted to building out data centers.
zerosizedweasle•2mo ago
It's interesting to see the market try to do anything to rally. The problem is you guys are rallying on the thought that you've scared the Fed into cutting rates, but by rallying you short-circuit it: you ensure they won't cut. And that's how the market's lilypad-hopping thinking is actually just stupidity. You rallied, so now there are no rate cuts, so the crash will be even more brutal.
wmf•2mo ago
GPU "neoclouds" are a different topic than whose logo is on the server.
2OEH8eoCRo0•2mo ago
Why would they chase a lower margin business area? Are they out of ideas?
nijave•2mo ago
More vertical integration
2OEH8eoCRo0•2mo ago
Like IBM?
wmf•2mo ago
Like IBM in the 1960s.
alecco•2mo ago
Guys, please read the article. Yes, NVIDIA already sells servers. What they mean is that they are also going to take over other parts of the system that the partners currently build.

> Starting with the VR200 platform, Nvidia is reportedly preparing to take over production of fully built L10 compute trays with a pre-installed Vera CPU, Rubin GPUs, and a cooling system instead of allowing hyperscalers and ODM partners to build their own motherboards and cooling solutions. This would not be the first time the company has supplied its partners with a partially integrated server sub-assembly: it did so with its GB200 platform when it supplied the whole Bianca board with key components pre-installed. However, at the time, this could be considered as L7 – L8 integration, whereas now the company is reportedly considering going all the way to L10, selling the whole tray assembly — including accelerators, CPU, memory, NICs, power-delivery hardware, midplane interfaces, and liquid-cooling cold plates — as a pre-built, tested module.

JCM9•2mo ago
This would basically start to turn cloud providers into CoLo facilities that just host these servers.

Makes sense longer term for NVidia to build this but adds to the bear case for AWS et al long term on AI infrastructure.

nijave•2mo ago
I talked to someone at Nvidia ~2019 or ~2020 and their plan at the time was to completely vertically integrate and sell compute as a service via their own managed data centers with their own software, drivers, firmware, and hardware so this seems like just another incremental step in that direction.
SV_BubbleTime•2mo ago
In 2019 or 2020 that probably seemed reasonable.

Now? You would have to tell me nVidia was also building multiple nuclear power plants to get the scale to make sense.

Kye•2mo ago
They're invested in nuclear power. Pairing datacenters with small modular reactors is at least on the minds of all the AI companies.
pishpash•2mo ago
One day they will build AI out of radioactive source material directly and skip the reactor. Maybe.
noir_lord•2mo ago
That's one way to arrive at an IBM Mainframe like model I guess.

It'll work until you can buy comparable expansion cards for open systems (if history is any guide).

mrbungie•2mo ago
Yep, tech is incredibly circular. Once Nvidia gets there, it's highly probable that "disruptive" competition will appear, out of sheer desire/pressure for more freedom and options (and knowing NVDA, also costs).
cyanydeez•2mo ago
AMD is already knocking on the same door. If they had focused more on drivers it'd be an equal comparison.
michaelt•2mo ago
Maybe in 2019, but I find it hard to believe nvidia looks at google’s TPU business model enviously these days.
skybrian•2mo ago
Are TPU’s a bad business model?
AnthonyMouse•2mo ago
Customers are increasingly wary of building their business on someone else's land because they've seen it happen too many times that once you're locked in the price goes up, or the company who is now your only viable supplier decides to enter your own market.

And at least if anyone can buy the hardware you'll have your own or have multiple competing providers you can lease it from. If you can only lease it and only from one company, who would want to touch that? It's like purposely walking into a trap.

cyanydeez•2mo ago
This might be the source of the AI bubble burst, just like the 2000 bubble. Eventually someone's gonna raise the price to cover a bill and suddenly everyone looks at actual revenue and power bills , calculates within 6 months they'll not get their MBA turnip squeezed bonus and walk away
skybrian•2mo ago
Lock-in is a valid concern, but on the other hand, for many apps, it seems like this can be fairly easily mitigated? If you can swap in a different LLM, I don't think it matters if it's running on Google TPU's or NVidia?

Meanwhile, at the hardware level, TPU's provide some competition for NVidia.

AnthonyMouse•2mo ago
But then what's the case for making them Google-exclusive instead of selling the hardware to anyone who wants to buy one?
amelius•2mo ago
> vertically integrate

Sounds like they are going the Apple way. How long until we have to pay 30% to get our apps in their AI-Store?

DevKoala•2mo ago
We already pay a 100% premium on AWS.
amelius•2mo ago
At least that premium is not directly proportional to our revenue.
DevKoala•2mo ago
Good callout. However, nothing says Nvidia would adopt this margin tactic either.
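The distinction this sub-thread is drawing (a fixed markup on compute vs. an App Store-style cut of revenue) can be made concrete with a toy comparison. All numbers below are hypothetical, purely to illustrate the shape of the two pricing models:

```python
def cloud_premium(compute_cost: float, markup: float = 1.0) -> float:
    """Cost under a fixed percentage markup on compute (e.g. a 100% premium)."""
    return compute_cost * (1 + markup)

def store_cut(revenue: float, cut: float = 0.30) -> float:
    """Cost under a revenue-share model (e.g. a 30% platform cut)."""
    return revenue * cut

# Hypothetical business: $50k/yr of compute, $1M/yr of revenue.
# The markup tracks your infrastructure spend; the cut tracks your top line.
print(f"${cloud_premium(50_000):,.0f}")   # $100,000
print(f"${store_cut(1_000_000):,.0f}")    # $300,000
```

The point being: doubling your revenue doubles the revenue-share cost but leaves the compute markup untouched, which is why the two feel so different even when the headline percentages look similar.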
diamond559•2mo ago
More like IBM
modeless•2mo ago
They're not stopping at servers. They want to sell datacenters.
heisenbit•2mo ago
Competing with your customers can be a risky strategy for a platform provider. If the platform abandons the neutral stance its customers will be a lot more open to alternatives.
m_ke•2mo ago
Soon OpenAI will make its own chips and Nvidia its own foundational models
wmf•2mo ago
I always wondered why a bunch of different companies make identical graphics cards then complain that it's a horrible business and Nvidia is screwing them. I wondered even more strongly when I saw a dozen flavors of the NVL72 rack. If the rack is so complex and difficult to manufacture, why have N companies do redundant work?
AnthonyMouse•2mo ago
Designing the board is a different business from designing the chip. You're negotiating with the DRAM fabs instead of the logic fabs, selling to a thousand retailers instead of a dozen integrators, etc. And once you've done all that, you do it more than once. ASRock isn't just making Nvidia GPUs, they're making AMD and Intel GPUs, motherboards, wireless routers, etc.

It's more efficient to have companies that specialize in making all kinds of boards than to make each of the companies making chips have to do that too. And it's a competitive market so the margins are low and the chip makers have little incentive to enter it when they can just have someone else do it for little money.

foruhar•2mo ago
This video shows the systems being built and shipped with cooling, cabling, etc.

It’s pretty mind blowing what this process shows, from the manipulation of atoms and electrons all the way up to these clusters. Particularly mind blowing for me, who has cable management issues with a ten port router.

https://youtu.be/1la6fMl7xNA?si=eWTVHeGThNgFKMVG

dzonga•2mo ago
What's mind blowing about the video you shared is the amount of copper cable used.

I thought that with fiber we wouldn't need copper cables, except maybe for electricity distribution, but clearly I was wrong.

Thanks for sharing.

btown•2mo ago
“Nobody gets fired for choosing NVIDIA.”
alberth•2mo ago
How does their failed attempt to acquire ARM impact this?
wmf•2mo ago
It doesn't.
idatum•2mo ago
Can anyone comment on wafer-scale systems, multiple equivalent chips on an entire wafer?

Seems like where things are heading?

wmf•2mo ago
Only Cerebras is doing wafer-scale. It seems to be working for them but no one is copying them. The minimum unit (one wafer) costs millions and it's not clear how good their multi-wafer scaling is.
matthews3•2mo ago
A 300mm wafer on a recent process node (TSMC N3) is estimated to be around $20k at quantity[1]. I don't know what kind of testing and crazy packaging processes would cost for a wafer-scale chip, but I can't imagine it would put the price anywhere near the millions.

[1]: https://www.tomshardware.com/tech-industry/tsmcs-wafer-prici...
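The per-die implication of that wafer price can be sanity-checked with the standard dies-per-wafer rule of thumb. The die area and the $20k wafer figure below are assumptions for illustration (the latter taken from the linked estimate), not vendor data, and this ignores yield, testing, and packaging:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation: usable area minus edge loss."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius**2 / die_area_mm2                      # area-only count
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

WAFER_COST_USD = 20_000   # assumed N3 wafer price from the linked estimate
DIE_AREA_MM2 = 800        # assumed near-reticle-limit die, typical for big GPUs

dies = dies_per_wafer(300, DIE_AREA_MM2)
print(f"{dies} dies/wafer -> raw silicon cost per die around "
      f"${WAFER_COST_USD / dies:,.0f}")
```

Under these assumptions a 300mm wafer yields a few dozen large dies, so raw silicon is a few hundred dollars per die. That supports the parent's point: wafer-scale unit costs should be driven by packaging, testing, and yield management rather than the bare wafer price alone.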