MacBook Pro M5 Pro (64GB RAM)
>...at 4.4+ tokens/second
And that speed is with 4-bit quantization, no less.
> The entire 209GB model streams from SSD through a custom Metal compute pipeline.
This is my main problem.
If I were to run this off a Mac SSD 24/7 for heavy usage such as Openclaw, it would significantly reduce the SSD's lifetime.
I can't imagine using this long term right now, but improvements will follow. Still a great write-up anyway.
How sure are you about that? I've never looked closely at how a large mixture-of-experts LLM switches between expert modules, but when the usage stays on roughly the same topic (as it often would when editing the same codebase), I wouldn't be surprised if the changes in expert composition are fairly rare and fairly small. And to the extent switching does happen, it tends to cause repeated reads from the flash disk rather than writes.
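A toy sketch of the intuition (all names and sizes are made up for illustration, not taken from any real model): a MoE router picks the top-k experts from gate logits, and tokens with similar hidden states tend to select overlapping expert sets, so the same weights get re-read.

```python
import numpy as np

rng = np.random.default_rng(0)

def route_top_k(hidden, gate_weights, k=4):
    """Toy MoE router: pick the top-k experts per token from gate logits."""
    logits = hidden @ gate_weights               # (tokens, num_experts)
    return np.argsort(logits, axis=-1)[:, -k:]   # expert ids per token

# Two consecutive tokens on a similar topic => similar hidden states.
gate = rng.standard_normal((64, 32))             # hypothetical 32-expert gate
h1 = rng.standard_normal(64)
h2 = h1 + 0.05 * rng.standard_normal(64)         # small perturbation
e1 = set(route_top_k(h1[None, :], gate)[0])
e2 = set(route_top_k(h2[None, :], gate)[0])
print(len(e1 & e2))  # nearby tokens mostly reuse the same experts
```

In this toy setup the overlap is high, which is the read-heavy access pattern the comment describes: the same expert tensors fetched again and again.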
I feel like whenever I'm trying to find out which local models will run on my hardware, I have to overestimate the requirements, because people don't know how to wait for things.
Also, reading data doesn't cause SSD wear.
Outside of that the SSD is idling.
Table 3 shows, for K=4 experts, IO of 943 MB/token at 3.15 tok/s, giving an average IO rate of about 2970 MB/s, far below what the SSD could do.
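A quick back-of-envelope check of those Table 3 figures:

```python
# Figures quoted from Table 3 (K=4 experts).
io_per_token_mb = 943          # MB read from SSD per generated token
tokens_per_sec = 3.15          # generation speed
bandwidth_mbps = io_per_token_mb * tokens_per_sec
print(f"{bandwidth_mbps:.0f} MB/s")  # ~2970 MB/s average read rate
```

That's comfortably under the sequential read throughput of a modern NVMe SSD, so the drive isn't the bottleneck at that rate.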
I'm not sure, but not all expert weights are needed immediately. Maybe they could do async reads for the down-projection tensors, parallelizing compute with IO.
Not sure if this works on a Mac; I've only tested my larger-than-RAM setup on Linux with io_uring O_DIRECT reads, and I saw that about 20% of total reads finish while my fused up/gate matmul is already running.
Edit: Typos
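A minimal sketch of that overlap idea, not the commenter's actual io_uring code: kick off the read of the down-projection weights on a background thread while the up/gate matmul runs, then join when the result is needed. All dimensions and the SwiGLU-style activation are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

d_model, d_ff = 256, 1024

def load_down_weights():
    # Stand-in for an async O_DIRECT read of the "down" tensor from SSD.
    return np.random.default_rng(2).standard_normal((d_ff, d_model))

def up_gate_matmul(x, w_up, w_gate):
    # Fused up/gate projection with SiLU gating (SwiGLU-style, assumed).
    up, gate = x @ w_up, x @ w_gate
    return up * (gate / (1.0 + np.exp(-gate)))   # up * SiLU(gate)

rng = np.random.default_rng(1)
x = rng.standard_normal((1, d_model))
w_up = rng.standard_normal((d_model, d_ff))
w_gate = rng.standard_normal((d_model, d_ff))

with ThreadPoolExecutor(max_workers=1) as pool:
    down_future = pool.submit(load_down_weights)  # IO in flight...
    hidden = up_gate_matmul(x, w_up, w_gate)      # ...while compute runs
    w_down = down_future.result()                 # hopefully done by now

out = hidden @ w_down
print(out.shape)  # (1, 256)
```

The win is exactly the ~20% overlap described above: whatever fraction of the read completes during the matmul is IO time hidden behind compute.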
You can use this approach with Intel Optane, which is wearout-resistant unlike NAND and can thus substitute for RAM. Last I checked, it was available quite cheap on the secondary market, ~$1/GB as opposed to ~$15/GB or more for DRAM. (Of course that's nowhere near as cheap as NAND, which is around ~$0.1/GB but quite wearout-prone with heavy writes.)
I've had great success (~20 t/s) running it on a M1 Ultra with room for 256k context. Here are some lm-evaluation-harness results I ran against it:
mmlu: 87.86%
gpqa diamond: 82.32%
gsm8k: 86.43%
ifeval: 75.90%
More details of my experience:
- https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...
- https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...
- https://gist.github.com/simonw/67c754bbc0bc609a6caedee16fef8...
Overall an excellent model to have for offline inference.