
1.5 TB of VRAM on Mac Studio – RDMA over Thunderbolt 5

https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-studio-rdma-over-thunderbolt-5
107•rbanffy•2h ago

Comments

behnamoh•1h ago
My expectations for M5 Max/Ultra devices:

- Something like a DGX QSFP link (200Gb/s, 400Gb/s) instead of TB5. Otherwise the economics of this RDMA setup, while impressive, don't make sense.

- Neural accelerators to get prompt prefill time down. I don't expect RTX 6000 Pro speeds, but something like 3090/4090 would be nice.

- 1TB of unified memory in the maxed out version of Mac Studio. I'd rather invest in more RAM than more devices (centralized will always be faster than distributed).

- 1TB/s+ of bandwidth. For the past 3 generations, the speed has been 800GB/s...

- The ability to overclock the system? I know it probably will never happen, but my expectations of a Mac Studio are not the same as of a laptop, and I'm TOTALLY okay with it consuming 600W+ of power. Currently it's capped at ~250W.

Also, as the OP noted, this setup can support up to 4 Mac devices because each Mac must be connected to every other Mac!! All the more reason for Apple to invest in something like QSFP.
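
Back-of-envelope on that mesh constraint (a sketch; the three-spare-TB5-ports figure is my assumption, not from the article):

    # Full-mesh port math: n nodes need (n-1) ports each and n*(n-1)/2 cables.
    # Assumption: each Mac can dedicate 3 TB5 ports to the cluster mesh.
    def mesh_requirements(n_nodes: int) -> tuple[int, int]:
        ports_per_node = n_nodes - 1
        cables = n_nodes * (n_nodes - 1) // 2
        return ports_per_node, cables

    for n in (2, 3, 4, 5):
        ports, cables = mesh_requirements(n)
        print(f"{n} nodes: {ports} ports/node, {cables} cables")
    # With 3 spare ports per machine, 4 nodes is the largest full mesh;
    # 5 nodes would already need 4 ports per node and 10 cables.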

tylerflick•1h ago
> TOTALLY okay with it consuming 600W+ of power

The 2019 i9 MacBook Pro has entered the chat.

burnt-resistor•1h ago
Apple has always sucked at embracing properly robust tech in high-end gear for markets beyond individual prosumers or creatives. When Xserves existed, they used commodity IDE drives without HA or replaceable PSUs, and couldn't compete with contemporary enterprise servers (HP-Compaq/Dell/IBM/Fujitsu). The Xserve RAID half-heartedly used Fibre Channel for interconnect but couldn't touch a NetApp or EMC SAN/filer. I'm disappointed Apple has a persistent blind spot that keeps it from succeeding in the data-center-quality gear category, when it could've had virtualized servers, networking, and storage: the kind of things that eventually find their way into my home lab after 5-7 years.
angoragoats•52m ago
> Also, as the OP noted, this setup can support up to 4 Mac devices because each Mac must be connected to every other Mac!! All the more reason for Apple to invest in something like QSFP.

This isn’t any different with QSFP unless you’re suggesting that one adds a 200GbE switch to the mix, which:

* Adds thousands of dollars of cost,

* Adds 150W or more of power draw, plus the loud fan noise that comes with it,

* And perhaps most importantly adds measurable latency to a networking stack that is already higher latency than the RDMA approach used by the TB5 setup in the OP.

fenced_load•33m ago
Mikrotik has a switch that can do 6x200G for ~$1300 and <150W.

https://www.bhphotovideo.com/c/product/1926851-REG/mikrotik_...

zozbot234•52m ago
> Neural accelerators to get prompt prefill time down.

Apple Neural Engine is a thing already, with support for multiply-accumulate on INT8 and FP16. AI inference frameworks need to add support for it.

> this setup can support up to 4 Mac devices because each Mac must be connected to every other Mac!!

Do you really need a fully connected mesh? Doesn't Thunderbolt just show up as a network connection that RDMA runs on top of?

fooblaster•43m ago
Might be helpful if they actually provided a programming model for the ANE that isn't ONNX. The ANE not having a native development model just means software support will not be great.
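
For reference, the sanctioned path today is Core ML rather than a native ANE API; a minimal conversion sketch (the toy model and shapes are illustrative):

    # Convert a traced PyTorch module to Core ML and ask for the Neural Engine.
    # coremltools decides at runtime whether ops actually land on the ANE.
    import coremltools as ct
    import torch

    class Tiny(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x @ x.T)

    traced = torch.jit.trace(Tiny().eval(), torch.randn(8, 8))
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=(8, 8))],
        compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer CPU + Neural Engine
    )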
liuliu•23m ago
They were talking about neural accelerators (a piece of silicon on the GPU): https://releases.drawthings.ai/p/metal-flashattention-v25-w-...
csdreamer7•16m ago
> Apple Neural Engine is a thing already, with support for multiply-accumulate on INT8 and FP16. AI inference frameworks need to add support for it.

Or, Apple could pay for the engineers to add it.

Dylan16807•51m ago
> 1TB/s+ of bandwidth. For the past 3 generations, the speed has been 800GB/s...

M4 already hit the necessary speed per channel, and M5 is well above it. If they actually release an Ultra, that much bandwidth is guaranteed on the full version. Even the smaller version with 25% fewer memory channels will be pretty close.

We already know Max won't get anywhere near 1TB/s since Max is half of an Ultra.
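
Hedged arithmetic behind that (M4 Max's published figure; the rest follows from an Ultra being two Max dies):

    m4_max_bw = 546                   # GB/s, Apple's spec for M4 Max
    full_ultra = 2 * m4_max_bw        # 1092 GB/s for a hypothetical full Ultra
    binned_ultra = 0.75 * full_ultra  # 819 GB/s with 25% fewer channels
    print(full_ultra, binned_ultra)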

delaminator•1h ago
> Working with some of these huge models, I can see how AI has some use, especially if it's under my own local control. But it'll be a long time before I put much trust in what I get out of it—I treat it like I do Wikipedia. Maybe good for a jumping-off point, but don't ever let AI replace your ability to think critically!

It is a little sad that they gave someone an uber machine and this was the best he could come up with.

Question answering is interesting but not the most interesting thing one can do, especially with a home rig.

The realm of the possible:

- Video generation: CogVideoX at full resolution, longer clips; Mochi or Hunyuan Video with extended duration

- Image generation at scale: FLUX batch generation, 50 images simultaneously

- Fine-tuning: actually train something; show LoRA on a 400B model, or full fine-tuning on a 70B

but I suppose "You have it for the weekend" means chatbot go brrrrr and snark
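
For the curious, the kind of run meant here, sketched with Hugging Face peft (the model name and hyperparameters are placeholders, not anything tested on this cluster):

    # LoRA attaches small low-rank adapters to chosen projection matrices,
    # so only a tiny fraction of the parameters are trainable.
    import torch
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-3.1-70B",    # placeholder for the "70B" case
        torch_dtype=torch.bfloat16,
    )
    lora = LoraConfig(
        r=16,                          # adapter rank
        lora_alpha=32,                 # scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections only
        lora_dropout=0.05,
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # well under 1% of the base weights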

theshrike79•1h ago
Yea, I don't understand why people use LLMs for "facts". You can get them from Wikipedia or a book.

Use them for something creative: write a short story on spec, generate images.

Or the best option: give it tools and let it actually DO something, like "read my message history with my wife, find the top 5 gift ideas she might have hinted at, and search for options to purchase them". Perfect for a local model: there's no way in hell I'd feed my messages to a public LLM, but the one sitting next to me that I can turn off the second it twitches the wrong way? Sure.

benjismith•25m ago
> show LoRA on a 400B model, or full fine-tuning on a 70B

Yeah, that's what I wanted to see too.

newsclues•1h ago
https://m.youtube.com/watch?v=4l4UWZGxvoc

Seems like the ecosystem is rapidly evolving.

mmorse1217•1h ago
Hey Jeff, wherever you are: this is awesome work! I’ve wanted to try something like this for a while and was very excited for the RDMA over thunderbolt news.

But I mostly want to say thanks for everything you do. Your good vibes are deeply appreciated and you are an inspiration.

rahimnathwani•1h ago
The largest nodes in his cluster each have 512GB of RAM. DeepSeek V3.1 is a 671B-parameter model whose weights take up 700GB of RAM: https://huggingface.co/deepseek-ai/DeepSeek-V3.1

I would have expected that going from one node (which can't hold the weights in RAM) to two nodes would have increased inference speed by more than the measured 32% (21.1t/s -> 27.8t/s).

With no constraint on RAM (4 nodes), the inference speed is less than 50% higher than with only 512GB.

Am I missing something?

zeusk•1h ago
The TB5 link (RDMA) is much slower than direct access to system memory.
elorant•43m ago
You only get 80Gbps of network bandwidth. There's your bottleneck right there. InfiniBand, in comparison, can give you up to 10x that.
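
In concrete numbers (link rate and memory bandwidth as commonly cited, not measured here):

    tb5_gbps = 80                    # Thunderbolt 5 data rate, Gb/s per link
    tb5_GBps = tb5_gbps / 8          # = 10 GB/s
    local_mem_GBps = 800             # M3 Ultra unified memory, GB/s
    print(f"link ~{tb5_GBps:.0f} GB/s vs local ~{local_mem_GBps} GB/s "
          f"({local_mem_GBps / tb5_GBps:.0f}x)")  # an 80x gap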
zozbot234•41m ago
Weights are read-only data, so they can just be memory mapped and reside on SSD (only a small fraction will be needed in VRAM at any given time); the real constraint is activations. MoE architecture should help quite a bit here.
hu3•32m ago
> only a small fraction will be needed in VRAM at any given time

I don't think that's true. At least not without heavy performance loss, in which case "just be memory mapped" is doing a lot of work here.

By that logic GPUs could run models much larger than their VRAM would otherwise allow, which doesn't seem to be the case unless heavy quantization is involved.

zozbot234•6m ago
Existing GPU APIs are sadly not conducive to this kind of memory mapping with automated swap-in. The closest thing you get, AIUI, is "sparse" allocations in VRAM, in which only a small fraction of your "virtual address space" equivalent is mapped to real data, and the mapping can be dynamic.
Dylan16807•22m ago
You need all the weights every token, so even with optimal splitting the fraction of the weights you can farm out to an SSD is proportional to how fast your SSD is compared to your RAM.

You'd need to be in a weirdly compute-limited situation before you can replace significant amounts of RAM with SSD, unless I'm missing something big.

> MoE architecture should help quite a bit here.

In that you're actually using a smaller model and swapping between them less frequently, sure.
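
The proportionality, made explicit (bandwidth numbers are illustrative): if the SSD and RAM reads for a token overlap perfectly, the SSD stops being the bottleneck only while f/bw_ssd <= (1-f)/bw_ram, i.e. f <= bw_ssd/(bw_ssd + bw_ram):

    bw_ram = 800   # GB/s, unified memory (illustrative)
    bw_ssd = 8     # GB/s, fast NVMe (illustrative)
    f_max = bw_ssd / (bw_ssd + bw_ram)
    print(f"at most ~{f_max:.1%} of weights on SSD before decode slows")  # ~1.0%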

lvl155•52m ago
Seriously, Jeff has the best job. He and STH's Patrick.
geerlingguy•35m ago
I got to spend a day with Patrick this week, and try out his massive CyPerf testing rig with multiple 800 Gbps ConnectX-8 cards!
andy99•52m ago
Very cool. I'm probably reading too much into it, but why are they seemingly hyping this now (I've seen a bunch of it recently) with no M5 Max/Ultra machines in sight? Is it because their release is imminent (I have heard Q1 2026), or is it to try and stretch out demand for the M4 Max / M3 Ultra? I plan to buy one (not four), but I'd feel like I'm buying something that's going to be immediately out of date if I don't wait for the M5.
GeekyBear•42m ago
I imagine that they want to give developers time to get their RDMA support stabilized, so third party software will be ready to take advantage of RDMA when the M5 Ultra lands.

I definitely would not be buying an M3 Ultra right now on my own dime.

fooblaster•34m ago
Does it actually create a unified memory pool? It looks more like an accelerated backend for a collective communications library like NCCL, which is very much not unified memory.
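
To illustrate the distinction, a collective-communications toy in torch.distributed (gloo backend, single host, purely illustrative): each rank owns its own tensor, and data moves only through explicit collectives, unlike a shared address space:

    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp

    def worker(rank: int, world_size: int):
        dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                                rank=rank, world_size=world_size)
        t = torch.full((4,), float(rank + 1))     # each rank's private data
        dist.all_reduce(t, op=dist.ReduceOp.SUM)  # every rank now holds the sum
        print(rank, t)
        dist.destroy_process_group()

    if __name__ == "__main__":
        mp.spawn(worker, args=(2,), nprocs=2)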
chis•45m ago
I wonder what motivates Apple to release features like RDMA, which are purely useful for server clusters, while ignoring basic QoL stuff like remote management or rack mount hardware. It's difficult to see it as a cohesive strategy.

Makes one wonder what Apple uses for their own servers. I guess maybe they have some internal M-series server product they just haven’t bothered to release to the public, and features like this are downstream of that?

vsgherzi•42m ago
Last I heard, for the private compute features they were racking and stacking M2 Mac Pros.
xienze•42m ago
> rack mount hardware

I guess they prefer that third parties deal with that. There are rack mount shelves for Mac minis and Studios.

jeffbee•37m ago
Thunderbolt RDMA is quite clearly the nuclear option for remote management.
rsync•36m ago
These are my own questions, asked since the first Mac mini was introduced:

- Why is the tooling so lame?

- What do they, themselves, use internally?

Stringing together Mac minis (or a "Studio", whatever) with Thunderbolt cables ... Christ.

hamdingers•36m ago
> I guess maybe they have some internal M-series server product they just haven’t bothered to release to the public, and features like this are downstream of that?

Or do they have some real server-grade product coming down the line, and are releasing this ahead of it so that 3rd party software supports it on launch day?

Retr0id•17m ago
I wonder if there's any possibility that an RDMA expansion device could exist in the future - i.e. a box full of RAM on the other end of a Thunderbolt cable. Although I guess such a device would cost almost as much as a Mac mini in any case...

200 Years Ago: Abel's Resolution of the Quintic Question

https://www.ams.org/journals/notices/202601/noti3264/noti3264.html
1•bikenaga•5m ago•0 comments

Trump Is Doubling Down on His Disastrous A.I. Chip Policy

https://www.nytimes.com/2025/12/17/opinion/trump-ai-chips-nvidia-china.html
2•voxadam•7m ago•1 comments

Peter Higgs: I wouldn't be productive enough for today's academic system

https://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system
1•firefax•7m ago•0 comments

Why Do We Still Pay for International Calls in 2025?

https://rodyne.com/?p=3293
1•boznz•9m ago•0 comments

Six billionaires who could move markets, policy in 2026

https://nairametrics.com/2025/12/18/six-billionaires-who-could-move-markets-policy-in-2026/
1•kckkmgboji•12m ago•0 comments

DNS as a Filesystem: A Practical Study in Applied Category Theory

https://loss.dev/?node=honk-protocol
1•graemefawcett•14m ago•1 comments

Spaceorbust – Terminal RPG where GitHub commits power space civilization

https://spaceorbust.com
2•zjkramer•18m ago•2 comments

Data Science Weekly – Issue 630

https://datascienceweekly.substack.com/p/data-science-weekly-issue-630
1•sebg•21m ago•0 comments

New AI Tool That Helps with Meta Ads

https://www.audience-plus.com
1•alexTs101•21m ago•1 comments

Trmnl – 2025 in Review

https://usetrmnl.com/blog/2025-in-review
1•MBCook•23m ago•0 comments

Show HN: Roblox Python tower defense game

https://github.com/jackdoe/roblox-python-tower-defense
1•jackdoe•24m ago•0 comments

Fee-based primary care is rapidly rising in US, hastening doctor shortages

https://medicalxpress.com/news/2025-12-fee-based-primary-rapidly-hastening.html
2•bikenaga•27m ago•1 comments

Chemical Hygiene

https://karpathy.bearblog.dev/chemical-hygiene/
2•zdw•33m ago•0 comments

North Korean hackers stole a record $2B of crypto in 2025, Chainalysis says

https://www.coindesk.com/business/2025/12/18/north-korean-hackers-stole-a-record-usd2b-of-crypto-...
4•hhs•33m ago•0 comments

How to Use AI as a Real Software Engineering Tool

https://chat.engineer/p/how-to-use-ai-as-a-real-software-engineering-tool
2•olh•34m ago•0 comments

Show HN: Patch PHPUnit to shard your Laravel test suite

https://github.com/boltci/shards
1•matt413•42m ago•0 comments

Wall Street Ruined the Roomba and Then Blamed Lina Khan

https://www.thebignewsletter.com/p/how-wall-street-ruined-the-roomba
3•danboarder•44m ago•0 comments

Show HN: Infexec – A utility for pinning commands to terminal panes

https://github.com/Software-Deployed/infexec
2•indigophone•45m ago•0 comments

A Testing Conundrum

https://nedbatchelder.com/blog/202512/a_testing_conundrum.html
1•todsacerdoti•46m ago•0 comments

Show HN: CLI tools to browse Claude Code and Codex CLI logs interactively

1•hy_wondercoms•46m ago•0 comments

Show HN: TiliaJS FRP JavaScript/TypeScript/ReScript State Management

https://tiliajs.com
1•indigophone•47m ago•0 comments

Exploring the Swift SDK for Android

https://swift.org/blog/exploring-the-swift-sdk-for-android/
1•frizlab•47m ago•0 comments

Cocktail Distributed Key Generation

https://github.com/C2SP/C2SP/blob/main/cocktail-dkg.md
1•choult•48m ago•0 comments

Prediction Market Investors – Where Do I Find Them?

7•h100ker•49m ago•8 comments

Understanding Encoder and Decoder LLMs

https://magazine.sebastianraschka.com/p/understanding-encoder-and-decoder
1•jeffjeffbear•50m ago•0 comments

Show HN: Squache – A self-hosted HTTPS caching proxy for web scraping

https://github.com/devrupt-io/squache
2•devrupt•52m ago•0 comments

LinkedIn's war against bot scrapers ramps up as AI gets smarter

https://news.bloomberglaw.com/artificial-intelligence/linkedins-war-against-bot-scrapers-ramps-up...
1•hhs•54m ago•0 comments

Once Again, Health Care Proves to Be a Bitter Political Pill for GOP

https://www.nytimes.com/2025/12/18/us/politics/health-care-gop.html
2•duxup•56m ago•5 comments

Show HN: Git repo visualization and interactive stars and commits history

https://git-history.com/
2•rohitghumare•56m ago•0 comments

Property-Based Testing Caught a Security Bug I Never Would Have Found

https://kiro.dev/blog/property-based-testing-fixed-security-bug/
2•nslog•56m ago•0 comments