These are the best kinds of posts
https://www.google.com/search?client=firefox-b-m&q=grace%20h...
How long would it take to recoup the cost if you made the model available for others to run inference at the same price as the big players?
Assumptions:
Batch 4x to get 400 tokens per second, pushing his power consumption to 900W instead of the underutilized 300W.
Electricity around €0.20/kWh.
Tokens valued at €1/1M out.
Assume ~70% utilization.
Result:
You get ~1M tokens per hour, which is a net profit of ~€0.8/hr after electricity, for a payback time of a bit over a year on the €9K investment.
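To make the arithmetic explicit, here is the same estimate as a few lines of Python (every number is one of the assumptions above, nothing measured):

    # Back-of-envelope payback estimate; all inputs are the assumptions above.
    tokens_per_sec = 400         # with 4x batching
    utilization = 0.70
    price_eur_per_mtok = 1.0     # €1 per 1M output tokens
    power_kw = 0.9               # 900 W under load
    elec_eur_per_kwh = 0.20
    hardware_eur = 9000

    tokens_per_hour = tokens_per_sec * 3600 * utilization   # ~1.0M tokens
    revenue = tokens_per_hour / 1e6 * price_eur_per_mtok    # ~€1.01/hr
    electricity = power_kw * elec_eur_per_kwh               # €0.18/hr
    net = revenue - electricity                             # ~€0.83/hr
    print(f"payback: ~{hardware_eur / net / 24:.0f} days")  # ~450 days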
Honestly though there is a lot of handwaving here. The most significant unknown is getting high utilization with aggressive batching and 24/7 load.
Also the demand for privacy can make the utility of the tokens much higher than typical API prices for open source models.
In a sort of orthogonal way, renting 2x H100s costs around $6 per hour, which makes the payback time a bit over a couple of months ($9K / ($6/h x 24h) ≈ 62 days).
GLM 4.5 Air, to be precise. It's the smaller 106B model, not the full 355B one.
Worth mentioning when discussing token throughput.
It will fit in system RAM, and since it's a mixture-of-experts model and the experts are not too large, I can at least run it. Tokens/second will be slower, but system memory bandwidth is somewhere around 500-600 GB/s, so it should feel OK.
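As a rough sanity check on "should feel OK", here is a decode-speed ceiling from memory bandwidth alone. The ~12B active parameters per token (the commonly reported figure for GLM-4.5 Air) and the 8-bit weights are my assumptions, not something from the post:

    # Bandwidth-bound decode ceiling for a MoE model held in system RAM.
    # Assumes decode is limited by reading the active weights once per token.
    active_params = 12e9     # reported active params/token for GLM-4.5 Air
    bytes_per_param = 1.0    # 8-bit quant; use 2.0 for FP16/BF16
    mem_bw = 550e9           # middle of the ~500-600 GB/s range

    ceiling = mem_bw / (active_params * bytes_per_param)
    print(f"~{ceiling:.0f} tok/s ceiling")  # ~46 tok/s; real throughput is
    # lower once compute, KV-cache reads and routing overhead are counted.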
I think there are probably law firms/doctors' offices that would gladly pay ~3-4K euro a month to have this thing delivered and run truly "on-prem" to work with documents they can't risk leaking (patent filings, patient records, etc.).
For a company with 20-30 people, the legal and privacy protection is worth the small premium over using cloud providers.
Just a hunch though! This would have it paid off in 3-4 months?
Let's continue to hope.
Also:
> I arrived at a farmhouse in a small forest…
Were you not worried you were going to get murdered?
People have gotten games to run on a DGX Spark, which is somewhat similar (GB10 instead of GH200)
On AMD I've read it works great, but on NVIDIA chips, mouse-heavy games become unusable for me.
It's an interesting question, and since OP indicates he previously had a 4090, he's qualified to reply and hopefully will. However, I suspect the GH200 won't turn out to run games much faster than a 5090 because A) Games aren't designed to exploit the increased capabilities of this hardware, and B) The GH200 drivers wouldn't be tuned for game performance. One of the biggest differences of datacenter AI GPUs is the sheer memory size, and there's little reason for a game to assume there's more than 16GB of video memory available.
More broadly, this is a question that, for the past couple decades, I'd have been very interested in. For a lot of years, looking at today's most esoteric, expensive state-of-the-art was the best way to predict what tomorrow's consumer desktop might be capable of. However, these days I'm surprised to find myself no longer fascinated by this. Having been riveted by the constant march of real-time computer graphics from the 90s to 2020 (including attending many Siggraph conferences in the 90s and 00s), I think we're now nearing the end of truly significant progress in consumer gaming graphics.
I do realize that's a controversial statement, and sure, there will always be a way to throw more polys, bigger textures and heavier algorithms at any game, but... each increasing increment just doesn't matter as much as it once did. For typical desktop and couch consumer gaming, the upgrade from 20fps to 60fps was a lot more meaningful to most people than 120fps to 360fps. With synthetic frame and pixel generation, increasing resolution beyond native 4K matters less. (Note: head-mounted AR/VR might be one of the few places 'moar pixels' really matters in the future.) Sure, it can look a bit sharper, a bit more varied, and the shadows can have more perfect ray-traced fall-off, but at this point piling on even more of those technically impressive feats of CGI doesn't make the game more fun to play, whether on a 75-inch TV at 8 feet or a 34-inch monitor at two feet. As an old-school computer graphics guy, it's incredible to see real-time path tracing adding subtle colors to shadows from light reflections bouncing off colored walls. It's living in the sci-fi future we dreamed of at Siggraph '92. But as a gamer looking for some fun tonight, honestly... the improved visuals don't contribute much to the overall gameplay between a 3070, 4070 and 5070.
LTT tried it in one of their videos... I forget which card, but one of the serious Nvidia AI cards.
...it runs like shit for gaming workloads. It does the job, but it's comfortably beaten by a mid-tier consumer card at 1/10th the price.
Their AI-focused datacenter cards are definitely not the same thing with a different badge glued on.
He had left a trail of breadcrumbs. Although he was hungry, it seemed a prudent precaution.
> # Data Center/HGX-Series/HGX H100/Linux aarch64/12.8 seem to work! wget https://us.download.nvidia.com/tesla/570.195.03/NVIDIA-Linux...
> ...
Nothing makes you feel more "I've been there" than typing inscrutable arcana to get a GPU working for ML work...
We'll see how it goes, but what _is_ happening is RAM replacement: Nvidia 5090s modded to 96GB are somewhat a thing now. $4K. YMMV, caveat emptor. https://www.alibaba.com/product-detail/Newest-RTX-5090-96gb-...
> 4x Arctic Liquid Freezer III 420 (B-Ware) - €180
Quite an aside, but man: I fricking love Arctic. Seeing their fans in the new Corsi-Rosenthal boxes has been awesome. Such good value. I've been using a Liquid Freezer II after nearly buying my last air-cooled heatsink and seeing the LF-II on sale for <$75. Buy.
Please give us some power consumption figures! I'm so curious how it scales up and down. Do different models take similar or different power? Asking a lot, but it'd be so neat to see a somewhat high res view (>1 sample/s) of power consumption (watts) on these things, such a unique opportunity.
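One way to capture that, in case OP wants a low-effort logger: nvidia-smi can loop on its own (nvidia-smi --query-gpu=power.draw --format=csv -lms 500), or a few lines of Python over NVML give the >1 sample/s view. A sketch, assuming the stock nvidia-ml-py bindings behave on the GH200:

    # Log GPU power draw at 2 samples/s via NVML (pip install nvidia-ml-py).
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        while True:
            mw = pynvml.nvmlDeviceGetPowerUsage(handle)  # milliwatts
            print(f"{time.time():.1f}\t{mw / 1000:.1f} W", flush=True)
            time.sleep(0.5)
    finally:
        pynvml.nvmlShutdown()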
Most of them are in California? Anything in NY/NJ?
There should be some all over the country.
GPUs have such a short lifespan these days that it is really important to compare new vs. used.
I had 4x 4090, that I had bought for about $2200 each in early 2023. I sold 3 of them to help pay for the GH200, and got 2K each.
The Blackwells are superior on paper, but there's some "Nvidia Math" involved: when they report performance in press announcements, they don't usually mention the precision. Yes, the Blackwells are more than double the speed of the Hopper H100s, but that's comparing FP8 to FP4 (the H100s can't do native FP4). Yes, that's great for certain workloads, but not the majority.
What's more interesting is the VRAM speed. The 6000 Pro has 96 GB of GPU memory and 1.8 TB/s bandwidth; the H100 in the GH200 has the same amount, but with HBM3 at 4.9 TB/s. That ~2.7x increase is very influential in the overall performance of the system.
Lastly, if it works, the NVLink-C2C does 900 GB/s of bandwidth between the cards, so about 5x what a pair of 6000 Pros could do over PCIe 5. Big LLMs need well over the 96 GB on a single card, so this becomes the bottleneck.
e.g. Here are benchmarks on the RTX 6000 pro using the GPT-OSS-120B model, where it generates 145 tokens/sec, and I get 195 tokens/sec on the GH200. https://www.reddit.com/r/LocalLLaMA/comments/1mm7azs/openai_...
The NVLink is definitely a strong point, I missed that detail. For LLM inference specifically it matters fairly little iirc, but for training it might.
Pre-story: For 3 years I wanted to build a rack gaming server, so I can play with my son in our small apartment, where we don't have enough space for a gaming computer (wife also doesn't allow it). I have a stable IPsec connection to my parents' house, where I have a powerful PV plant (90kWp) and a rack server for my freelance job.
Fast forward to 2 months ago: I see a Supermicro SYS-7049GP-TRT for 1400€ on eBay. It looks clean, sold by some IT reuse warehouse. No description, just 3 photos and the case label. I ask the seller whether he knows what's in it, and he says he didn't check. The case alone costs 3k new here in Germany. I buy it.
It arrives. 64GB ECC memory, 2x Xeon Silver, 1x 500GB SSD, 5x GBit LAN cards, dual 2200W power supplies. I remove the air shroud, and: an Nvidia V100S 32GB emerges. I sell the card on eBay for 1600€ and buy 2x Xeon 6254 CPUs (100€ each) to replace the 2x Silver ones. Last week, I bought two Blackwell RTX 4000 Pros for 1100€ each. Enough for gaming with my son! (And I can have some fun with LLMs and Home Assistant/smart home...)
The case fits 4x dual-slot GPUs, so I could fit 4x RTX 6000 in it (384GB VRAM). At 3k each, that would come to 12k (still too much for me... but let's check back in a couple of years...).
Buying used enterprise gear is fun. I had so many good experiences and this stuff is just rock solid.
Good one
SCNR
Nice find, and I admire your courage for even attempting this!
I found it interesting to learn there are businesses around converting used servers into desktops. Sounds like a good initiative to avoid some e-waste (assuming the desktops are easy to maintain).
You really need a special server cabinet and HVAC for these kinds of beasts. But you'd need them for training, right?
I needed this info, thanks for putting it up. Can this really be an issue for every data center?
How does the seller get these desktops directly from NVIDIA?
And if the seller's business is custom made desktop boxes, why didn't he just fit the two H100s into a better desktop box?
I expect because they were no longer in the sort of condition to sell as new machines? They were clearly well used, and selling "as seen" is the lowest-reputational-risk way to offload them.
This thing was too unwieldy to make into a desktop (you can see how much effort it took), and it was in pretty bad condition. I think he just wanted to get rid of it without having to deal with returns. I took a bet on it, and was lucky it paid off.
H100 PCIe and GH200 are two very different things. The advantages of Grace Hopper are much higher connection speeds, bandwidth, and lower power consumption.
Hackaday would probably welcome you.
In Germany, where large items are often purchased with cash, it would be unremarkable if you did it several times a year.