https://www.scan.co.uk/products/asus-ascent-gx10-desktop-ai-...
Asus makes some really useful things, but the v1 Tinker Board, for example, was quite problem-ridden. This is similarly way out on the edge of their expertise; I'm not sure I'd buy an out-there v1 Asus product this expensive.
(Edit: GB of course, not MB, thanks buildbot)
I've finetuned diffusion models streaming from an SSD without noticeable speed penalty at high enough batchsize.
Should you get one?
It’s a bit too early for me to provide a confident recommendation concerning this machine. As indicated above, I’ve had a tough time figuring out how best to put it to use, largely through my own inexperience with CUDA, ARM64 and Ubuntu GPU machines in general.
The ecosystem improvements in just the past 24 hours have been very reassuring, though. I expect it will be clear within a few weeks how well supported this machine is going to be.

The Spark's GPU gets ~4x the FP16 compute performance of an M3 Ultra GPU at less than half the Mac Studio's total TDP.
You do get about twice as much memory bandwidth out of the Mac though.
1- I vote with my wallet: do I want to pay a company to be my digital overlord, doing everything it can to keep me inside its ecosystem? I put in too much effort to earn my freedom to give it up that easily.
2- Software: Almost certainly, I would want to run linux on this. Do I want to have something that has or eventually will have great mainstream linux support, or something with closed specs that people in Asahi try to support with incredible skills and effort? I prefer the system with openly available specs.
I've extensively used Macs, iPhones, and iPads over time. The only Apple device I ever bought was an iPad, and I would never have bought it if I had known they deliberately disable multitasking on it.
How can a serious company not notice these glaring issues in their websites?
It's not that they don't notice.
They don't care.
But I'm not surprised, this is ASUS. As a company, they don't really seem to care about software quality.
>> What is the memory bandwidth supported by Ascent GX10?
> AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.
You don't have to wonder: I bet they're using generative AI to speed up delivery velocity.
> What is the memory bandwidth supported by Ascent GX10?
> AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.
Which is appropriate, given the applications!
I see that they mention it uses LPDDR5x, so bandwidth will not be nearly as fast as something using HBM or GDDR7, even if bus width is large.
Edit: I found elsewhere that the GB10 has a 256bit L5X-9400 memory interface, allowing for ~300GB/sec of memory bandwidth.
> What is the memory bandwidth supported by Ascent GX10?
> AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.
Never seen anything like that before. I wonder whether this product page was actually finished and ready to go public.
Fortunately, their products are also easy to crack open and probe.
You're free to lift the kernel and any drivers/libraries and run them on your distribution of choice, it'll just be hacky.
The kernel is patched (and maintained by Canonical, not Nvidia), but the patches hanging off their 6.17-next branch didn't look outrageous to me. The main hitch right now is that upstream doesn't have a Realtek r8127 driver for the ethernet controller. There were also some MediaTek-related patches that were probably relevant, since MediaTek designed the CPU die.
Overall it feels close to full upstream support (to be clear: you CAN boot this system with a fully upstream kernel, today). And booting with UEFI means you can just use the nvidia patches on $YOUR_FAVORITE_DISTRO and reboot, no need to fiddle with or inject the proper device trees or whatever.
Plus, of course, the software stack for gaming on this isn't available.
1) This still has raster hardware, even ray tracing cores. It's not technically an "AI focused card" like the AMD Instinct hardware or Nvidia's P40-style cards.
2) It kinda does have a stack. ARM is the hardest part to work around, but Box86 will get the older DirectX titles working. The GPU is Vulkan compliant too, so it should be able to leverage Proton/DXVK to accommodate the modern titles that don't break on ARM.
The tough part is the price. I don't think ARM gaming boxes will draw many people in with worse performance at a higher price.
I am still trying to think of a use case that a Ryzen AI Max, a MacBook, or a plain gaming GPU cannot cover.
The last few jobs I've had were building binaries targeting ARM AArch64 SBC devices. Cross-compiling was sometimes annoying, and you couldn't truly eat your own dogfood on workstations, since there are subtle differences around atomics and memory-consistency guarantees between ISAs.
Mac M series machines are an option except that then you're not running Linux, except in VM, and then that's awkward too. Or Asahi which comes with its own constraints.
Having a beefy ARM machine at my desk natively running Linux would have pleased me greatly. Especially if my employer was paying for it.
At least with this, you get to pay both the Nvidia and the Asus tax!
- Price: $3k / $5k
- Memory: same (128GB)
- Memory bandwidth: ~273 GB/s / 546 GB/s
- SSD: same (1 TB)
- GPU advantage: ~5x-10x depending on memory bottleneck
- Network: same 10GbE (via TB)
- Direct cluster: 200Gb / 80Gb
- Portable: No / Yes
- Free Mac included: No / Yes
- Free monitor: No / Yes
- Linux out of the box: Yes / No
- CUDA dev environment: Yes / No
W.r.t. IP networking, the fastest I'm aware of is 25Gb/s, via TB5 adapters like those from Sonnet.
The Asus clustering speed is not limited to p2p.
How is the monitor "free" if the Mac costs more?
For homelab use, this is the only thing that matters to me.
They're not the best choice for anyone who wants to run LLMs as fast and cheap as possible at home. Think of it like a developer tool.
These boxes are confusing the internet because they've let the marketing teams run wild (or at least the marketing LLMs run wild) trying to make them out to be something everyone should want.
How great would it be if, instead of shoving in these bots to help decipher the marketing speak, they just had the specs right up front?
What is the purpose of this thing?
Please give me a good old HTML table with specs, will ya?
The Asus Ascent GX10 a Nvidia GB10 Mini PC with 128GB of Memory and 200GbE - https://news.ycombinator.com/item?id=43425935 - March 2025 (50 comments)
https://news.ycombinator.com/item?id=45586776
https://news.ycombinator.com/item?id=45008434
https://news.ycombinator.com/item?id=45713835
https://news.ycombinator.com/item?id=45575127
https://news.ycombinator.com/item?id=45611912
To get a sense for use cases, see the playbooks on this website: https://build.nvidia.com/spark.
Regarding limited memory bandwidth: my impression is that this is part of the onramp for the DGX Cloud. Heavy lifting/production workloads will still need to be run in the cloud.
Also, the software NVIDIA has you install on another machine to use this thing is garbage. They tried to make it sort of appliance-y, but most people would rather just have SSH work out of the box and go from there. IMO it's totally unnecessary. The software aspect was what put me over the edge.
Maybe the gen 2 will be better, but unless you have a really specific use case that this solves well, buy credits or something somewhere else.