
I got an Nvidia GH200 server for €7.5k on Reddit and converted it to a desktop

https://dnhkng.github.io/posts/hopper/
139•dnhkng•3h ago

Comments

dnhkng•3h ago
This is the story of how I bought enterprise-grade AI hardware designed for liquid-cooled server racks that was converted to air cooling, and then back again, survived multiple near-disasters (including GPUs reporting temperatures of 16 million degrees), and ended up with a desktop that can run 235B parameter models at home. It’s a tale of questionable decisions, creative problem-solving, and what happens when you try to turn datacenter equipment into a daily driver.
ipsum2•1h ago
I saw the same post on Reddit and was so tempted to purchase it, but I live in the US. Cool to see it wasn't a scam!
amirhirsch•50m ago
> # Tell the driver to completely ignore the NVLINK and it should allow the GPUs to initialise independently over PCIe !!!! This took a week of work to find, thanks Reddit!

I needed this info, thanks for putting it up. Can this really be an issue for every data center?
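
For anyone else hitting this, the workaround the post describes usually boils down to a driver module option. A minimal sketch, assuming the parameter is NVreg_NvLinkDisable (treat the name as an assumption and check `modinfo nvidia` for your driver release):

    # /etc/modprobe.d/nvidia-nvlink.conf -- assumed parameter name, verify with `modinfo nvidia`
    options nvidia NVreg_NvLinkDisable=1

    # rebuild the initramfs and reboot so the option takes effect (Debian/Ubuntu example)
    sudo update-initramfs -u
    sudo reboot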

dauertewigkeit•48m ago
It's a very interesting read, but a lot is not clear.

How does the seller get these desktops directly from NVIDIA?

And if the seller's business is custom-made desktop boxes, why didn't he just fit the two H100s into a better desktop box?

Ntrails•25m ago
> why didn't he just fit the two H100s into a better desktop box?

I expect because they were no longer in the sort of condition to sell as new machines? They were clearly well used, and selling them "as seen" carries the lowest reputational risk when offloading them.

volf_•2h ago
That's awesome.

These are the best kinds of posts

BizarroLand•2h ago
Yep. Just enough to inspire jealousy while also saying it's possible
arein3•2h ago
It's practically free
ChrisArchitect•2h ago
Maybe the title could be "I bought an Nvidia server..." to avoid confusion over whether it has something to do with Grace Hopper the person, and her servers... or mainframes?
walrus01•2h ago
Grace Hopper is the Nvidia product code name for the chip, much like how Intel CPUs were named after rivers, etc.

https://www.google.com/search?client=firefox-b-m&q=grace%20h...

dnhkng•1h ago
Makes sense. I'm so used to the naming I forgot it's not common knowledge. I hope the new title is clearer.
mrose11•1h ago
This is freaking cool. Nice job!
Philpax•1h ago
You lucky dog. Have fun!
albertgoeswoof•1h ago
What inference performance are you getting on this with llama?

How long would it take to recoup the cost if you made the model available for others to run inference at the same price as the big players?
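
Not OP, but the break-even math is easy to sketch. A back-of-envelope version, where every number is a placeholder rather than a figure from the post (electricity and idle time ignored):

    # all numbers below are assumptions; swap in measured throughput and real market pricing
    COST_EUR=7500              # purchase price
    TOK_PER_SEC=30             # assumed sustained generation throughput
    PRICE_PER_MTOK_EUR=0.60    # assumed price per million output tokens
    TOK_PER_DAY=$((TOK_PER_SEC * 3600 * 24))
    echo "$TOK_PER_DAY $PRICE_PER_MTOK_EUR $COST_EUR" | \
      awk '{ rev = $1 / 1e6 * $2; printf "~%.2f EUR/day, ~%.0f days to break even\n", rev, $3 / rev }'

With placeholders like these it comes out to thousands of days, so the answer hinges almost entirely on the throughput and pricing you plug in.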

20after4•59m ago
Deal of the century.
skizm•57m ago
Serious question: does this thing actually make games run really great? Or are they so optimized for AI/ML workloads that they either don’t work or run normal video games poorly?

Also:

> I arrived at a farmhouse in a small forest…

Were you not worried you were going to get murdered?

jaggirs•51m ago
I believe these GPUs don't have direct HDMI/DisplayPort outputs, so at the very least it's tricky to even run a game on them. I guess you need to run the game in a VM or so?
mrandish•47m ago
> does this thing actually make games run really great

It's an interesting question, and since OP indicates he previously had a 4090, he's qualified to reply and hopefully will. However, I suspect the GH200 won't turn out to run games much faster than a 5090 because A) Games aren't designed to exploit the increased capabilities of this hardware, and B) The GH200 drivers wouldn't be tuned for game performance. One of the biggest differences of datacenter AI GPUs is the sheer memory size, and there's little reason for a game to assume there's more than 16GB of video memory available.

More broadly, this is a question that, for the past couple decades, I'd have been very interested in. For a lot of years, looking at today's most esoteric, expensive state-of-the-art was the best way to predict what tomorrow's consumer desktop might be capable of. However, these days I'm surprised to find myself no longer fascinated by this. Having been riveted by the constant march of real-time computer graphics from the 90s to 2020 (including attending many Siggraph conferences in the 90s and 00s), I think we're now nearing the end of truly significant progress in consumer gaming graphics.

I do realize that's a controversial statement, and sure, there will always be a way to throw more polys, bigger textures and heavier algorithms at any game, but... each successive increment just doesn't matter as much as it once did. For typical desktop and couch consumer gaming, the upgrade from 20fps to 60fps was a lot more meaningful to most people than 120fps to 360fps. With synthetic frame and pixel generation, increasing resolution beyond native 4K matters less. (Note: head-mounted AR/VR might be one of the few places 'moar pixels' really matters in the future.) Sure, it can look a bit sharper, a bit more varied, and the shadows can have more perfect ray-traced fall-off, but at this point piling on even more of those technically impressive feats of CGI doesn't make the game more fun to play, whether on a 75" TV at 8 feet or a 34-inch monitor at two feet. As an old-school computer graphics guy, it's incredible to see real-time path tracing adding subtle colors to shadows from light reflections bouncing off colored walls. It's living in the sci-fi future we dreamed of at Siggraph '92. But as a gamer looking for some fun tonight, honestly... the improved visuals don't contribute much to the overall gameplay between a 3070, 4070 and 5070.

Scene_Cast2•23m ago
I'd guess that the datacenter "GPUs" lack all the fixed-function graphics hardware (texture samplers, etc) that's still there in modern consumer GPUs.
Havoc•41m ago
>Serious question: does this thing actually make games run really great?

LTT tried it in one of their videos... forgot which card, but one of the serious Nvidia AI cards.

...it runs like shit for gaming workloads. It does the job but is comfortably beaten by a mid-tier consumer card at 1/10th the price.

Their AI-focused datacenter cards are definitely not the same thing with a different badge glued on.

tigranbs•49m ago
Ah, that's the best way to spend ~10K
systemtest•39m ago
Love how a €7.5k, 20-kilogram server is placed on a €5 particleboard table. I have owned several LACKs but would never put anything valuable on one. IKEA rates them at a 25-kilogram maximum load.
djoldman•36m ago
> Getting the actual GPU working was also painful, so I’ll leave the details here for future adventurers:

> # Data Center/HGX-Series/HGX H100/Linux aarch64/12.8 seem to work! wget https://us.download.nvidia.com/tesla/570.195.03/NVIDIA-Linux...

> ...

Nothing makes you feel more "I've been there" than typing inscrutable arcana to get a GPU working for ML work...
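
And once the arcana is typed, the usual sanity check is a couple of standard nvidia-smi queries (nothing GH200-specific), which would also flag the kind of bogus temperature readings the post mentions:

    # confirm the driver sees the GPU and that the reported numbers are sane
    nvidia-smi --query-gpu=name,driver_version,memory.total,temperature.gpu --format=csv
    # detailed temperature and throttling information
    nvidia-smi -q -d TEMPERATURE,PERFORMANCE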

hollow-moe•26m ago
For that price? The bubble already popped for sure!
Frannky•21m ago
Wow! Kudos for thinking it was possible and making it happen. I was wondering how long it would be before big local models were possible under 10k—pretty impressive. Qwen3-235B can do mundane chat, coding, and agentic tasks pretty well.
ionwake•21m ago
Inspiring! Is there an IP I can connect to in order to test the inference speed?
m4r1k•15m ago
Wow! As others have said, deal of the century!! As a side note, a few years back, I used to scrape eBay for Intel QS Xeon and quite a few times managed to snag incredible deals, but this is beyond anything anyone has ever achieved!