A couple of examples:
Kimi K2 Thinking (1 trillion parameters): https://x.com/awnihannun/status/1986601104130646266
DeepSeek R1 (671B): https://x.com/awnihannun/status/1881915166922863045 - that one came with setup instructions in a Gist: https://gist.github.com/awni/ec071fd27940698edd14a4191855bba...
The release in Tahoe 26.2 will enable us to do fast tensor parallelism in MLX. Each layer of the model is sharded across all machines. With this type of parallelism you can get close to N-times faster with N machines. The main challenge is latency, since you have to communicate much more frequently.
The way it typically works in an attention block is: smaller portions of the Q, K and V linear layers are assigned to each node and are processed independently. Attention, RoPE, norm, etc. are run on the node-specific output of that. Then, when the output linear layer is applied, an "all reduce" is computed which combines the outputs of all the nodes.
EDIT: just realized it wasn't clear -- this means that each node ends up holding the portion of the KV cache specific to its KV tensor shards. This can change based on the specific style of attention (e.g., in GQA, where there are fewer KV heads than ranks, you end up having to do some replication, etc.).
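As a rough illustration, here is a minimal sketch of a sharded attention block, assuming MLX's distributed primitives (mx.distributed.init / all_sum) and plain MHA with the head count divisible by the number of ranks; names and shapes are illustrative, not the actual mlx-lm implementation:

```python
# Minimal sketch of a tensor-parallel attention block -- illustrative only,
# not the actual mlx-lm implementation.
import mlx.core as mx
import mlx.nn as nn


class ShardedAttention(nn.Module):
    def __init__(self, dims: int, n_heads: int, group):
        super().__init__()
        self.group = group
        self.n_local_heads = n_heads // group.size()   # heads owned by this rank
        self.head_dim = dims // n_heads
        local_dims = self.n_local_heads * self.head_dim
        # Each rank holds only its slice of Q/K/V and of the output projection.
        self.q_proj = nn.Linear(dims, local_dims, bias=False)
        self.k_proj = nn.Linear(dims, local_dims, bias=False)
        self.v_proj = nn.Linear(dims, local_dims, bias=False)
        self.o_proj = nn.Linear(local_dims, dims, bias=False)

    def __call__(self, x: mx.array) -> mx.array:
        B, L, _ = x.shape
        # Project into this rank's heads only; the KV cache for these heads
        # (omitted here) would also live only on this rank.
        q = self.q_proj(x).reshape(B, L, self.n_local_heads, -1).transpose(0, 2, 1, 3)
        k = self.k_proj(x).reshape(B, L, self.n_local_heads, -1).transpose(0, 2, 1, 3)
        v = self.v_proj(x).reshape(B, L, self.n_local_heads, -1).transpose(0, 2, 1, 3)
        out = mx.fast.scaled_dot_product_attention(q, k, v, scale=self.head_dim**-0.5)
        out = out.transpose(0, 2, 1, 3).reshape(B, L, -1)
        # One "all reduce" per attention block: partial outputs from all ranks
        # are summed into the full layer output.
        return mx.distributed.all_sum(self.o_proj(out), group=self.group)


# group = mx.distributed.init()   # typically launched across machines with `mlx.launch`
```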
What I am asking, however, is whether that will speed up decoding as close to linearly as it does prefilling.
Hopefully this makes it really nice for people who want to experiment with LLMs and run a local model, but means well-funded companies won't have any reason to grab them all instead of GPUs.
Using more, smaller nodes means your cross-node IO is going to explode. You might save money on compute hardware, but I wouldn't be surprised if you ended up with an even greater cost increase on the network hardware side.
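Rough back-of-the-envelope on that point, using R1-like dimensions and an assumed ring all-reduce (illustrative, not measured):

```python
# Illustrative numbers only: all-reduce traffic per decoded token under tensor
# parallelism, assuming DeepSeek-R1-like dimensions and a ring all-reduce.
layers = 61          # transformer blocks
hidden = 7168        # model (hidden) dimension
bytes_per_el = 2     # bf16 activations
allreduces = 2       # one after attention, one after the MLP, per layer

def per_token_traffic(n_nodes: int):
    msg = hidden * bytes_per_el
    # Ring all-reduce: each node sends/receives ~2*(N-1)/N times the message size.
    per_node = layers * allreduces * 2 * (n_nodes - 1) / n_nodes * msg
    total = per_node * n_nodes   # bytes crossing links, summed over all nodes
    return per_node, total

for n in (2, 4, 8, 16):
    per_node, total = per_token_traffic(n)
    print(f"{n:2d} nodes: {per_node/1e6:4.2f} MB/node, {total/1e6:5.1f} MB aggregate per token")

# Aggregate traffic grows roughly linearly with node count, and each of the
# ~122 all-reduces per token also pays a fixed latency cost on every hop.
```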
1. The power button is in an awkward location, meaning rackmounting them (either 10" or 19" rack) is a bit cumbersome (at best)
2. Thunderbolt is great for peripherals, but as a semi-permanent interconnect, I have worries over the port's physical stability... wish they made a Mac with QSFP :)
3. Cabling will be important, as I've had tons of issues with TB4 and TB5 devices with anything but the most expensive Cable Matters and Apple cables I've tested (and even then...)
4. macOS remote management is not nearly as efficient as Linux, at least if you're using open source / built-in tooling
To that last point, I've been trying to figure out a way to, for example, upgrade to macOS 26.2 from 26.1 remotely, without a GUI, but it looks like you _have_ to use something like Screen Sharing or an IP KVM to log into the UI, to click the right buttons to initiate the upgrade.
Trying "sudo softwareupdate -i -a" will install minor updates, but not full OS upgrades, at least AFAICT.
https://www.owc.com/solutions/thunderbolt-dock
It's a poor imitation of old ports that had screws on the cables, but should help reduce inadvertent port stress.
The screw only works with limited devices (i.e., not the Mac Studio end of the cord), but it can also be adhesive-mounted.
See for example:
Apparently since 2016: https://www.usb.org/sites/default/files/documents/usb_type-c...
So for any permanent Thunderbolt GPU setups, they should really be using this type of cable.
I'm not sure it would be of much utility, because this would presumably be for tensor-parallel workloads. In that case you want the ranks in your cluster to be uniform, or else everything will be forced to run at the speed of the slowest rank.
You could run pipeline parallel, but I'm not sure it'd be that much better than what we already have.
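A toy calculation of that trade-off, with made-up relative speeds and communication overhead ignored entirely:

```python
# Toy numbers, not measurements: 4 machines, one of them half as fast.
speeds = [1.0, 1.0, 1.0, 0.5]      # relative per-rank throughput
layers = 61

# Tensor parallel: every layer is split evenly, so every layer waits on the slowest rank.
tp_time = layers * (1 / len(speeds)) / min(speeds)
tp_ideal = layers * (1 / len(speeds)) / 1.0        # if all ranks were full speed

# Pipeline parallel: give each rank layers in proportion to its speed; a token still
# visits every stage in sequence, so per-token latency is ~unchanged vs. one machine.
pp_time = sum((layers * s / sum(speeds)) / s for s in speeds)   # = layers * N / sum(speeds)

print(f"tensor parallel (uniform ranks) : {tp_ideal:.1f} time units/token")
print(f"tensor parallel (one slow rank) : {tp_time:.1f} time units/token")
print(f"pipeline parallel               : {pp_time:.1f} time units/token")
```

In this toy setup one slow rank halves the tensor-parallel gain, while pipelining a single decode stream mostly buys memory capacity rather than latency.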
Here's a text edition: for $50k, the inference hardware market forces a trade-off between capacity and throughput:
* Apple M3 Ultra Cluster ($50k): Maximizes capacity (3TB). It is the only option in this price class capable of running 3T+ parameter models (e.g., Kimi K2), albeit at low speeds (~15 t/s).
* NVIDIA RTX 6000 Workstation ($50k): Maximizes throughput (>80 t/s). It is superior for training and inference but is hard-capped at 384GB VRAM, restricting model size to <400B parameters.
To achieve both high capacity (3TB) and high throughput (>100 t/s) requires a ~$270,000 NVIDIA GH200 cluster and data center infrastructure. The Apple cluster provides 87% of that capacity for 18% of the cost.
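Quick sanity check on those ratios, using only the figures quoted above:

```python
# Recomputing the quoted ratios from the numbers in this comment.
apple_cost, gh200_cost = 50_000, 270_000
apple_capacity_tb = 3.0

cost_fraction = apple_cost / gh200_cost            # ~0.185 -> "18% of the cost"
implied_gh200_tb = apple_capacity_tb / 0.87        # ~3.4 TB if 3 TB is "87% of that capacity"

print(f"cost fraction: {cost_fraction:.1%}")
print(f"implied GH200 cluster capacity: {implied_gh200_tb:.2f} TB")
```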
* They already cleared the first hurdle to adoption by shoving inference accelerators into their chip designs by default. It’s why Apple is so far ahead of their peers in local device AI compute, and will be for some time.
* I suspect this introduction isn’t just for large clusters, but also a testing ground of sorts to see where the bottlenecks lie for distributed inference in practice.
* Depending on the telemetry they get back from OSes using this feature, my suspicion is they'll deploy some form of distributed local AI inference system that leverages the devices tied to a given iCloud account, or on the LAN, to perform inference against larger models without bogging down any individual device (or at least the primary device in use).
For the endgame, I'm picturing a dynamically sharded model across local devices that shifts how much of the model is loaded on any given device depending on utilization, essentially creating local-only inference for privacy and security of their end users. Throw the same engines into, say, HomePods or Apple TVs, or even a local AI box, and voilà, you're golden.
https://www.youtube.com/shorts/sx9TUNv80RE
Edit: to be clear, macOS itself (Cocoa elements) is all SDR content and thus washed out.
Which, personally, I find to be extremely ugly and gross, and I do not understand why they thought this was a good idea.