My job just got me and our entire team a DGX Spark. I'm impressed at how easy it is to run Ollama models I couldn't run on my laptop. gpt-oss:120b is shockingly better than what I expected based on running the 20b model on my laptop.
The DGX has changed my mind about the future being small specialized models.
Are you shocked because that isn't your experience?
From the article it sounds like Ollama runs CPU inference, not GPU inference. Is that the case for you?
I was not familiar with the hardware, so I was disappointed there wasn't a picture of the device. Tried to skim the article and it's a mess. Inconsistent formatting and emoji without a single graph to visualize benchmarks.
I bet the input to the LLM would have been more interesting.
I have H100s to myself, and access to more GPUs than I know what to do with in national clusters.
The Spark is much more fun. And I’m more productive. With two of them, you can debug shallow NCCL/MPI problems before hitting a real cluster. I sincerely love Slurm, but there's nothing like a personal computer.
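By "shallow NCCL/MPI problems" I mean the kind of thing a two-node all_reduce smoke test already catches. A minimal sketch with torch.distributed, assuming torchrun supplies RANK/WORLD_SIZE/MASTER_ADDR for you (the tensor size and launch flags are just illustrative, not a recipe):

    import torch
    import torch.distributed as dist

    def main():
        # NCCL backend; torchrun provides rank/world-size/master-addr env vars.
        dist.init_process_group(backend="nccl")
        rank = dist.get_rank()
        torch.cuda.set_device(rank % torch.cuda.device_count())
        t = torch.ones(1, device="cuda") * rank
        dist.all_reduce(t)  # sums ranks; a hang here usually means a fabric or NCCL config issue
        print(f"rank {rank}: all_reduce -> {t.item()}")
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launch it on both boxes with something like torchrun --nnodes 2 --nproc_per_node 1 --node_rank 0 --master_addr <spark-0> check.py (node_rank 1 on the second machine); if that works, the same script runs unchanged under Slurm on a real cluster.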
As for debugging, that's where you should be allowed to spin up a small testing cluster on-demand. Why can't you do that with your slurm access?
But please have your LLM post writer be less verbose and repetitive. This is like the stock output from any LLM, where it describes in detail and then summarizes back and forth over multiple useless sections. Please consider a smarter prompt and post-editing…
There are official benchmarks of the Spark running multiple models just fine on llama.cpp
I haven't exactly bisected the issue, but I'm pretty sure convolutions are broken on sm_121 past a certain size: I'm getting a 20x memory blowup from a convolution after only a 2x batch size increase, _only_ on the DGX Spark.
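For anyone who wants to poke at it, the check is roughly this (channel counts and spatial sizes below are made up, not my actual workload): peak memory should roughly double when the batch doubles, not blow up ~20x.

    import torch

    def peak_mem_for_batch(batch):
        # Reset the peak-memory counter, run one conv forward, report peak MiB.
        torch.cuda.reset_peak_memory_stats()
        conv = torch.nn.Conv2d(64, 64, kernel_size=3, padding=1).cuda().half()
        x = torch.randn(batch, 64, 512, 512, device="cuda", dtype=torch.half)
        _ = conv(x)
        torch.cuda.synchronize()
        return torch.cuda.max_memory_allocated() / 2**20

    for b in (8, 16):  # hypothetical sizes, just to show the 2x batch step
        print(b, f"{peak_mem_for_batch(b):.0f} MiB")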
I haven't had any problems with inference, but I also don't use the transformers library that much.
llama.cpp was working for openai-oss last time I checked, and at release; not sure if something broke along the way.
I don't know whether memory fragmentation is even fixable on the driver side - this might just be a problem with the kernel's policies and the GPL, which keep them from automatically interfering with the memory subsystem at the granularity they'd like (see ZFS and its page-table antics) - or at least that's my read on it.
If you've done stuff on WSL you'll have seen similar issues, and you can work around it by running a service that periodically compacts and cleans up memory - I have it run every hour. Note that this does impact CPU performance and memory allocation speed at the very least, but I haven't had any issues with long training runs (24hr+) with it in place. (Assuming fragmentation is even the problem: I've never tried without the service, since I set it up as soon as I got the machine based on my WSL experience.)
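The service itself is nothing fancy. A rough sketch of the idea, assuming root and the standard /proc/sys/vm knobs (the hourly cadence and the Python wrapper are my own choices, not anything Nvidia recommends):

    import time

    def compact_and_drop():
        # Ask the kernel to compact memory across all zones.
        with open("/proc/sys/vm/compact_memory", "w") as f:
            f.write("1")
        # Drop the page cache plus dentries/inodes (3 = both).
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3")

    if __name__ == "__main__":
        while True:
            compact_and_drop()
            time.sleep(3600)  # once an hour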
Wow. Where do I sign up?
It would be cheaper to buy up a dozen 3060s and build a custom PC around them than to buy the Spark.
Given the extreme advantage they have with CUDA and the whole AI/ML ecosystem, barely matching Apple’s M-ultra speeds is a choice…
Apple benchmarks: https://github.com/ggml-org/llama.cpp/discussions/4167
(cited from https://lmsys.org/blog/2025-10-13-nvidia-dgx-spark/)
The userspace side is where AI is difficult with AMD. Almost all of the community is built around Nvidia tooling first, others second (if at all).
Practically, if the goal is 100% about AI and cloud isn't an option for some reason, both options are likely "a great way to waste a couple grand trying to save a couple grand": an RTX Pro 6000 would get you 7x the performance, and you'd likely still feel it's a bit slow on larger models. I say this as a Ryzen AI Max+ 395 owner, though I got mine because it's the closest thing to an x86 Apple Silicon laptop one can get at the moment.
It is also a standard UEFI+ACPI system; one Reddit user even reported booting Fedora 42 and installing the open kernel modules with no problem. The overall delta/number of specific patches in the Canonical 6.17-nvidia tree was pretty small when I looked (the current kernel is 6.11). That, and the likelihood that the consumer variant will support Windows, hopefully bodes well for its upstream Linux compatibility.
To be fair, most of this is also true of Strix Halo from what I can tell (most benchmarks put the DGX furthest ahead at prompt processing and a bit ahead at raw token output, but the software is still buggy and Blackwell is still a bumpy ride overall, so it might get better). But I think it's mostly the pricing that's holding it back. I'm curious what the consumer variant will be priced at.
https://www.anaconda.com/blog/python-nvidia-dgx-spark-first-...
Isn't this the same architecture, from a memory perspective, that Apple's M-series chips implement?
Two things need to happen for me to get excited about this:
1. It stimulates other manufacturers into building their own DGX-class workstations.
2. This all eventually gets shipped in a decent laptop product.
As much as it pains me, until that happens, it still seems like Apple Silicon is the more viable option, if not the most ethical.
Besides that though, I don't see how Nvidia is particularly unethical. They cooperate with Khronos, provide high-quality Linux and BSD drivers free of charge, and don't deliberately block third parties from writing drivers to support new standards. From a relativist standpoint that's as sanctimonious as server hardware gets.
Specifically WRT Mellanox, Nvidia's behavior was more petty than callous.
ARM64 Architecture: Not x86_64 (limited ML ecosystem maturity)
No PyTorch wheels for ARM64+CUDA (must use Docker)
Most ML tools optimized for x86
No evidence for any of this whatsoever. The author just asked Claude/Claude Code to write their article and it plain hallucinated some rubbish.