
Show HN: MessyData – Synthetic dirty data generator

https://github.com/sodadata/messydata
1•santiviquez•33s ago•0 comments

Tanker War

https://en.wikipedia.org/wiki/Tanker_war
1•softwaredoug•46s ago•0 comments

Helix 02 Living Room Tidy [video]

https://www.youtube.com/watch?v=CAdTjePDBfc
1•sgt•49s ago•0 comments

Un hack me now mate

1•Zelcius•1m ago•0 comments

Show HN: The Mog Programming Language

https://moglang.org
1•belisarius222•1m ago•0 comments

Show HN: OpenClix, Agent friendly, open-source retention tooling

https://github.com/openclix/openclix
1•jace_yoo•1m ago•0 comments

eInk wall remote for HomeAssistant – fed up with tablets and hacked Kindles

https://www.muros.ink/
1•prathammehta•1m ago•1 comments

Show HN: DocTracker – track client documents and send reminders

https://doctracker.app/en
1•bakabegemot•1m ago•0 comments

Models have some pretty funny attractor states

https://www.lesswrong.com/posts/mgjtEHeLgkhZZ3cEx/models-have-some-pretty-funny-attractor-states
1•semiquaver•2m ago•0 comments

Show HN: We built an MCP server so LLMs can self-correct against business rules

https://www.rynko.dev/mcp
1•ksrijith•3m ago•0 comments

Seldom: An Anonymity Network with Selective Deanonymization

https://dl.acm.org/doi/full/10.1145/3794848?af=R
1•maxrmk•3m ago•0 comments

Use /loop to run Claude Code on a Schedule

https://code.claude.com/docs/en/scheduled-tasks
1•thomascountz•3m ago•0 comments

AI agents are coming for government. How one big city is letting them in

https://www.fastcompany.com/91504876/boston-cio-santi-garces-on-ai-agents-mcp-open-data
1•johnshades•4m ago•0 comments

The Government Told Courts It Could Easily Refund Tariffs. Now It Says It Can't

https://www.techdirt.com/2026/03/09/the-government-told-courts-it-could-easily-refund-unlawful-ta...
4•cdrnsf•4m ago•0 comments

How to Track Competitor Pricing Changes Automatically

https://adversa.io/blog/track-competitor-pricing-changes/
1•robinweller•4m ago•0 comments

Canadian employment trends in the era of generative artificial intelligence

https://www150.statcan.gc.ca/n1/pub/36-28-0001/2026001/article/00003-eng.htm
1•jyunwai•4m ago•0 comments

Show HN: A daily arithmetic puzzle with a hidden Hard Mode

https://make24.app
1•kapework•7m ago•0 comments

Breaking macOS Screen Time for fun and profit

https://dunkirk.sh/blog/screentime/
1•clacker-o-matic•7m ago•2 comments

CIA faces furious backlash after hidden document with potential cure for cancer

https://www.dailymail.co.uk/sciencetech/article-15629211/cia-cancer-cure-document-declassified.html
4•Bender•8m ago•1 comments

SSH Config: The File Nobody Reads

https://vivianvoss.net/blog/ssh-config
1•alwillis•8m ago•0 comments

Show HN: Time as the 4th Dimension – What if it emerges from rotational motion?

1•lisajguo•9m ago•0 comments

The internet is being flooded with AI content. How can we tell what is human?

1•01-_-•9m ago•0 comments

Unified Attestation: open-source alternative to Google Play Integrity

https://uattest.net/
1•turrini•9m ago•0 comments

Moltbook: Bot‑Only Network Full of Prompt and Scam Posts Now Monitored

https://youscan.io/blog/moltbook-monitoring/
1•defly•10m ago•0 comments

Ultrasound-Responsive Nanoparticles for Biofilm Treatment

https://pubs.acs.org/doi/10.1021/jacsau.5c01711
1•PaulHoule•11m ago•0 comments

Show HN: Quadratic Intelligence Growth from Logarithmic Routing (QIS Protocol)

https://yonderzenith.github.io/QIS-Protocol-Website/article-architecture-diagram.html
1•chris_trevethan•11m ago•1 comments

OpenAI updates privacy policy as ads expand in ChatGPT

https://searchengineland.com/openai-updates-privacy-policy-as-ads-expand-in-chatgpt-471150
6•speckx•11m ago•0 comments

Show HN: Self-hosted Chromium engine with 256 parallel stealth sessions

https://owlbrowser.net/
1•ahstanin•12m ago•0 comments

Show HN: ChatShell – 22MB AI Agent with 9 Built-In Tools (Tauri, Not Electron)

https://github.com/chatshellapp/chatshell-desktop
1•s3anw3•13m ago•1 comments

Show HN: Marque – MCP/CLI server for persistent agent design identity

https://marque-web.vercel.app/
1•Parth_Sharma_18•13m ago•1 comments

Apple's 512GB Mac Studio vanishes, a quiet acknowledgment of the RAM shortage

https://arstechnica.com/gadgets/2026/03/apples-512gb-mac-studio-vanishes-a-quiet-acknowledgement-of-the-ram-shortage/
380•rbanffy•1d ago

Comments

znpy•1d ago
Apple recently introduced RDMA support in macOS. They are probably trying to push the people buying the 512GB configuration towards buying more of the 256GB configuration and clustering them together.
PostOnce•1d ago
A consumer computer company is not going to push people towards building a miniature HPC cluster. Closest we'll ever get to that is multiple GPUs for video games.*

*Nvidia is no longer a primarily consumer company, so all the other GPU stuff is no counterpoint

MaxikCZ•1d ago
One could argue that if you are buying 512gb RAM machines you are not a typical consumer.
PostOnce•18h ago
But you're also in the tiny minority of Apple customers, because most people who need 512GB of RAM are not looking at Apple products.
znpy•1d ago
Apple isn’t really just a consumer company. It does both consumer and enterprise stuff. Just look at all the fleet management stuff it does for iOS and macOS.

And besides that, high-end MacBook Pros and Studios are workstation-class computers, not consumer-level computers.

tonyedgecombe•1d ago
It’s definitely a consumer company when you compare it to Microsoft.
znpy•1d ago
The comparison is completely irrelevant.
hylaride•1d ago
> A consumer computer company

Apple isn't just a consumer computer company. Both iPhones and Macs have very large business markets. In fact, I'd argue that the primary reason Apple hasn't locked down macOS as much as iOS is that it'd absolutely kill the demand from software developers.

citizenpaul•23h ago
The second I saw LLMs run on GPUs I started trying to predict the last year that Nvidia produces a consumer GPU product.
carefree-bob•23h ago
I am doing the reverse, and trying to predict the last year that LLMs use NVIDIA GPUs. It's just an accident of history that video game cards are useful for LLMs, and there is absolutely nothing that NVIDIA is doing from a design standpoint that the big hyperscalers can't do on their own, cutting NVIDIA out, and doing a better job of it as they know their own unique needs. The only advantage NVIDIA has is supply chain relationships and it takes time to establish those, but once that's done, we'll see all the big companies rolling their own silicon and no longer relying on NVIDIA.
storus•1d ago
Weren't 512GB models selling like hot cakes to the complete surprise of Apple? Wait time was up to 3 months last time I checked. Glad I got mine last October.
appreciatorBus•1d ago
“Like hot cakes” is relative.

> The 512GB Mac Studio was not a mass-market machine—adding that much RAM also required springing for the most expensive M3 Ultra model, which brought the system’s price to a whopping $9,499.

The number of people willing to spend $10,000 on a computer is pretty tiny. Maybe they are common enough in HN circles, but I doubt anyone at Apple is losing sleep over them.

dangus•1d ago
Of course, $10,000 workstations for a corporation working on AI products might just be a necessary tool.

Just a guess, but I think it’s entirely possible that Apple sold through the full production run that they intended for this generation of the machine and they don’t want to order a new batch before the next generation of processors comes out.

I have to think that Apple is close to replacing the M3 Ultra with an M5 Ultra or something of the sort.

storus•1d ago
A retailer told me they sold more 512GB Mac Studios than any other type. N=1 I know, but still...
__patchbit__•1d ago
There is a $6,000 value-add service to configure your Mac Mini with AI and have it accessible over iMessage.
rattray•1d ago
Curious, what do you use it for?
jonhohle•1d ago
Probably an Electron app or two.
storus•1d ago
Huge local thinking LLMs to solve math and for general assistant-style tasks. Models like Kimi-2.5-Q3, DeepSeek-XX-Q4/Q5, Qwen-3.5-Q8, MiniMax-m2.5-Q8 etc. that bring me to Claude4/GPT5 territory without any cloud. For coding I have another machine with 3x RTX Pro 6000 (mostly Qwen subvariants) and for image/video/audio generation I have 2x DGX Sparks from ASUS.
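A back-of-the-envelope way to see why those quantization levels pair with a 512GB machine: a model's resident weight footprint is roughly parameters × bits-per-weight / 8. The parameter count and overhead factor below are illustrative assumptions, not specs of the models named in the comment:

```python
# Rough RAM footprint of a quantized model's weights:
#   bytes ≈ params * bits_per_weight / 8, times a fudge factor for
#   KV cache and runtime overhead. All numbers here are illustrative.

def model_footprint_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Approximate resident size in GB for a quantized model."""
    return params_billions * 1e9 * bits / 8 * overhead / 1e9

# A hypothetical 671B-parameter model at different quantization levels:
for bits in (8, 4, 3):
    print(f"Q{bits}: ~{model_footprint_gb(671, bits):.0f} GB")
```

At 4-bit that lands around 400GB, which is why the 512GB configuration is the only Mac that fits models of this class in one box.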
ganoushoreilly•1d ago
We must be twins, I've got the same three working in a cluster.

I was really excited to see where the GB300 desktops end up, with 768GB RAM, but now that data is leaking / popping up (Dell appears to only be 496GB), we may be in the 60-100k range and that's well out of my comfort zone.

If Apple came out with a 768GB Studio at 15k I'd bite in a heartbeat.

https://www.dell.com/en-us/lp/dell-pro-max-nvidia-ai-dev

storus•1d ago
Yeah, I didn't want to spend more than 50k for a local inference stack. I can amortize it in my taxes so it's not a big deal, but beyond that it would start eating into my other allocations. I might still get an M5 Ultra if it pops up and the benchmarks look good, possibly selling the M3 Ultra.
irusensei•1d ago
Any custom configuration takes a while for them to prepare. I remember my M3 Max took 2 or 3 months to arrive.

The good thing is they only seem to charge when the device ships, so if an M5 comes along you should be able to cancel the M3 Ultra and get the M5.

pera•1d ago
I was just looking to buy a Raspberry Pi 5: the 8GB one is now 58% more expensive than last year; that's more than what I'm willing to pay.
abnercoimbre•1d ago
What will you do? eBay?
pera•1d ago
Maybe if I can find an unused one, still not sure tbh... Or I might go for an alternative SBC from AliExpress and compromise on CPU
pjmlp•1d ago
Old Android phones might also be an option, depending on the use case.
giancarlostoro•1d ago
I can't personally comment on them since I haven't grabbed them yet, but these are two Pi clones I was considering:

Radxa Rock 5C and Orange Pi 5

I would do research on them because they are a similar form factor and usually cheaper for more memory… the software will be different.

benbojangles•1d ago
I'm running qwen3.5:0.8b on my Orange Pi Zero 2W, low tokens/s but it still runs. I think I paid around £14 for it over a year ago, but now the same board is double the price. I wouldn't buy a computer of any kind right now. It's a bubble.
giancarlostoro•1h ago
Interesting, I have a few Pis lying around. I know they'd be low token throughput, but I've debated putting some models on them. What's your setup look like, if you don't mind me asking? Is there a specific image or package you're using?
gzread•1d ago
Raspberry Pi hasn't been a cheap SBC for a long time. It's now in the same market segment as a NUC, but without the case and with worse price to performance.
pjmlp•9h ago
Unfortunately there are no ARM NUCs with good distribution support, and that is still a strong reason for getting a Raspberry Pi.
esskay•1d ago
Unless you specifically need a pi (unlikely) then they really are awful value now. Hard to really go out of the way to support them now they've stuck two fingers up at the solo/indie/educational community and gone all enterprise.

Second-hand mini PCs are a good option. Half the price of a Pi 5 + SD + power supply, and you often get them with 16GB RAM, a decent SSD, etc.

If you need GPIO then many of the rockchip boards are still fairly affordable and easily had.

mort96•1d ago
The Pi isn't great value, but honestly, I'm finding it hard to find a better trade-off between price, performance and software support right now than the compute modules for embedded projects where you can afford to spin a custom PCB. Especially for low-ish volume or prototype stuff.
teaearlgraycold•22h ago
I also love the compute modules for their size. Stick one on a nano base board and they’re half the size of a Pi 5. TBH the standard Pis are a bit frustrating with all of the IO. I do not believe the average purchaser is using one as a PC replacement and wants 4 USB ports and 2 HDMI ports. I’ve never seen one in use like that. They are mostly servers or driving a single display without any user input.
mort96•18h ago
100% with you on the IO. I've never even wanted two display output ports with any raspberry pi.

You know what I do want though? An actual damn HDMI port! HDMI cables are everywhere, wherever I am I have unlimited options to connect an HDMI device to some kind of screen. But micro HDMI? The literal only thing in my life that uses it is the Raspberry Pi 4 and 5. There have been plenty of times where I've reached for a Pi 3b instead of a 4 or 5 just because I didn't have a micro HDMI cable.

I do not understand what has gone through their head. How could anyone look at the use case for a Raspberry Pi and decide that two micro HDMI ports is a better choice than one HDMI port? I don't understand it. Like you, my experience with the Pi is that they mostly just sit there, headless, so the only reason I need display output is that it's useful during setup (because they don't have a proper serial console port).

I can't set up a Pi 4 or 5 without going hunting for that micro HDMI cable I bought specifically for that purpose and never use for anything else. I can set up a Pi 3b anywhere, at any time.

esskay•8h ago
The micro HDMI thing (which I too loathe) is for digital signage and industrial machinery - we (home users) aren't the audience and haven't been for a long time.

Being able to run two sides of an advertising board, or two control panel screens on a big hunk of metal doing fabrication things in a factory was more important to Raspberry Pi as a business apparently.

Why the heck they didn't just go with 1x normal HDMI and 1x USB-C + DP for the Pi 5 is a mystery; perhaps the SoC doesn't support it or something.

dwedge•1d ago
Don't forget the case and fan. I think the RPi 3 was the last one you could comfortably run without a fan and not worry about it frying the SD card
mort96•1d ago
Completely depends on what you're doing. If you're doing a lot of sustained compute, or doing graphics, then yeah you're gonna want some cooling. But it's a useful little machine for all kinds of tasks which don't cause sustained high power consumption.
dwedge•22h ago
Two fried on me. One was just running a print server without a case. It was in summer so ambient temperature was around 32C, but still, are you telling me you use an RPi 5 without even a cooling case?
WillAdams•1d ago
I've had an rPi4 running a copy of a forum and server (for reference) in one of the fancy aluminum cases which passively cools for a couple of years now, no issues.
esskay•22h ago
The big chunky aluminum ones do seem pretty good on the pi 4. I had one in the flirc case for a long time and it never seemed to have issues. Obviously adds to the cost though. Also not sure if the Pi 5 works as well in them given its higher thermals, and the Pi 4 didn't exactly run cool so imagine the 5 might throttle occasionally without active cooling.
dwedge•22h ago
Yeah I should have been more specific, a fan isn't the only option but you need either a fan or a cooling case. Running them naked is too risky now
ConfuSomu•11h ago
I have been using a Pi 4 as a desktop computer for a few years (didn't have anything else) with a microSD card and without any fan, heatsink or case. Haven't had any problems. Obviously, this depends on your environment, but it worked fine for me.
dwedge•1d ago
The Raspberry Pis have been bad value for money for at least 4 or 5 years now unless you're really sensitive to the power draw. Once you add in a case and fan (required if you don't want it to overheat and fry the SD card), the charger, and the SD card, it generally comes in at roughly the same price as a more capable Intel 1L PC like the Lenovo M920Q (though of course, those aren't new).
pera•1d ago
Yeah, power consumption (and performance per watt) is the main reason I keep buying Raspberry Pis. I haven't found anything similar in that regard, especially for Pi Zeros.
irusensei•1d ago
It's also very hard to encounter x86 machines that can be powered from PoE.
codealchemy•1d ago
True for base PoE (802.3af, 15.4W), but if you have PoE+ or greater (802.3at, 30W and up) you can start to power more common PCs - I’m running a couple repurposed Chromeboxes from PoE++ adapters.
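The power classes in this exchange can be tabulated. The per-standard budgets below are the figures available at the powered device (after cable losses) from the IEEE 802.3 PoE standards; the device draws in the usage lines are rough guesses for illustration, not measurements:

```python
# Power available at the powered device under each PoE standard
# (after cable losses; PSE-side figures are higher).
POE_BUDGETS_W = {
    "802.3af (PoE)": 12.95,
    "802.3at (PoE+)": 25.5,
    "802.3bt Type 3 (PoE++)": 51.0,
    "802.3bt Type 4 (PoE++)": 71.3,
}

def can_power(device_peak_w: float) -> list[str]:
    """Standards whose per-port budget covers the device's peak draw."""
    return [std for std, w in POE_BUDGETS_W.items() if w >= device_peak_w]

print(can_power(12))   # roughly a Raspberry Pi 5 class load
print(can_power(45))   # roughly a small x86 mini PC
```

This matches the comment: base 802.3af covers a Pi-class board, but an x86 box needs PoE+ or, more comfortably, 802.3bt.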
zrobotics•23h ago
I'm on mobile so can't easily pull up an example part number, but digital signage controllers can often be PoE powered. They're insanely overpriced new from the actual suppliers, but for hobby projects they can normally be sourced relatively easily on eBay. The trick is that many of the eBay sellers don't bother listing the specs, so you need to first search digital sign controller/computer on eBay and then look up the spec sheet from the model number.
dwedge•22h ago
Another annoyance is that you need to buy a hat for PoE, it seems like an oversight
teaearlgraycold•22h ago
I really like the ecosystem around them. All of the nice compact hats, the software, the 3D print files. Very googleable which also means easy to get help from an LLM.
shadowpho•14h ago
I thought x64 was better for perf per watt just because perf is so much better
neya•1d ago
There were a couple alternatives for a few years. Wonder if they are a better value for money now. Beaglebone Black, Orange Pi, Jetson Orin Nano, etc.
irusensei•1d ago
Geez, you just aggro'd the x86 guys.

Have fun reading 40 answers about how discarded Lenovos from 2017 are cheaper and stay idle at 5W. It springs to 3x the power usage of a pi if they do anything with it but who cares about performance per Watt?

ErneX•1d ago
> Pricing for the 256GB configuration has also increased, from $1,600 to $2,000
cobertos•1d ago
Is that a typo in the article? It's $5999 on Apple's website for that configuration
tekacs•1d ago
I think this means cost above. As in the extra cost you pay.
ErneX•1d ago
It’s what toggling the 256GB upgrade costs over the previous RAM amount, not the computer's total.
mkl•1d ago
I'm trying to work out if I should buy a 48GB M4 Pro Mac Mini now, or wait for M5 Pro ones later this year. For AI/ML purposes, mostly. As far as I can tell, the new M5 MacBooks didn't go up much or any for the same amount of RAM?
deelowe•1d ago
I don't expect the ram situation to get any better soon if that's what you're asking.
discordance•1d ago
Depends what kind of AI/ML purposes you are intending to work on
mikkupikku•1d ago
At this point is the performance advantage of Apple CPUs even worth it if you can't upgrade the ram itself? I'm thinking you might be better off building a PC and putting the absolute bare minimum RAM in it, with plans to swap that out with good stuff in a year or two once the RAM market stops being insane.
htsh•1d ago
are we sure the RAM market will stop being insane in a year or two or could this be the new norm?
onli•1d ago
Why should it be the new norm? We have an abnormal situation now, of massive amounts of investor money being poured into unprofitable bets, that this time had the side effect of eating up hardware components. There are two possible outcomes:

1. Yes, it's the new normal, then production capacity will be increased and prices fall.

2. No, it's not the new normal, the bubble pops and component prices come crashing down when buyers default etc.

Option 2 has been the normal outcome of these situations so far. But sure, the question remains how long all of this will take.

lukan•1d ago
Option 3: the global wars increase and continue to be the new normal, with shipping routes disrupted until the climax: China annexes Taiwan.

In that case prices will continue to rise (among other things).

mikkupikku•1d ago
I don't know if it'll be a year or two, hard to say exactly when the AI bubble will pop, but I feel quite certain it's coming. The AI stuff is great but most of the money being thrown around to all these different companies is mostly going to be wasted. Investors don't know who the winners and losers will be, just like when people were investing in pets.com instead of amazon.com.
gzread•1d ago
Every ten years the RAM cartel raises prices (it's not really about AI, see Gamers Nexus) and every ten years it is forced to lower them again.
mkl•1d ago
It's the RAM bandwidth that's the advantage, 300GB/s for M5 Pro. RAM in slots is way slower, ~50GB/s.
jsheard•1d ago
But for ML workloads the comparison isn't between slotted CPU RAM and Apple's unified RAM, it's between Apple's unified RAM and dedicated GPU VRAM, which can more than double even the M3 Ultras bandwidth at up to 1.8TB/sec. Apple Silicon makes a unique set of trade-offs that shine in certain areas but they are still trade-offs nonetheless, so it really depends on what exactly you're doing with the hardware.
zozbot234•1d ago
Dedicated GPU VRAM is much scarcer than the unified RAM you get on Mac platforms. This is a big deal for SOTA LLMs that combine high memory footprint with a need for high memory bandwidth in order to get acceptable performance.
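The bandwidth trade-off in this subthread can be made concrete with a simple ceiling estimate: for dense decode, generating each token streams roughly the full weight set once, so throughput is bounded by bandwidth divided by model size. The bandwidth figures below are the rough numbers quoted in the thread and the 400GB model size is a hypothetical, not a benchmark:

```python
# Upper bound on LLM decode speed when memory bandwidth is the
# bottleneck: each token streams ~all weights once, so
#   tokens/s <= bandwidth / model_size (ignoring compute and KV cache).

def decode_ceiling_tok_s(bandwidth_gb_s: float, model_gb: float) -> float:
    """Bandwidth-bound ceiling on tokens per second."""
    return bandwidth_gb_s / model_gb

# A hypothetical 400GB quantized model on the memory systems discussed:
for name, bw in [("slotted DDR5 (~50 GB/s)", 50),
                 ("M5 Pro unified (~300 GB/s)", 300),
                 ("dedicated HBM GPU (~1800 GB/s)", 1800)]:
    print(f"{name}: <= {decode_ceiling_tok_s(bw, 400):.2f} tok/s")
```

The point both commenters are circling: HBM wins on bandwidth per byte, but only the large unified pool fits a 400GB model at all without sharding across many GPUs.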
pjmlp•9h ago
RAM, disk, CPU, GPU, for me it isn't for quite some time, then again I have been mostly a Windows/UNX person, only using Apple gear when assigned via project delivery.
bombcar•1d ago
I wouldn't buy a local machine for AI/ML purposes unless you have an actual defined use case and programs to run (perhaps even being able to test them at an Apple Store).

Otherwise you may end up like others using a high-spec Mac mini to just access online models.

quietsegfault•1d ago
I do not appreciate you calling me out personally!
classified•1d ago
Which one are you, Mac or Mini?
itg•1d ago
The AMD Strix Halo pc's are another option. I was also debating between a mac mini but decided to go the AMD route.
freedomben•1d ago
I'm eyeing that seriously too. Are you running linux on it by chance? Would love to hear from someone running linux on a non-Apple AI capable machine
bombcar•1d ago
They may be trying to sell through the existing CPU before a launch (soft or not) of the M5-based versions (though I've heard the rumor is there will be no M5 Ultra and we might be looking at an M6 Ultra later in the year).
reactordev•1d ago
This is most likely the case. They ended their production run and have inventory (or so they thought). Now with the rush for LLM power, they sold out of them and they no longer have that inventory. This was a surprise to their bottom line AND their supply chain logistics plan!

I’m sure they wanted to order more but were priced out for the increase in ram costs. Apple probably decided it wasn’t worth it until they revamped the architecture (and put a larger order in this time around).

I’m not a buyer but I suspect that’s what’s playing out right now behind closed door meetings.

Terretta•1d ago
Regret on a $10k desktop rendered obsolete for purpose (the 512GB of RAM only has so many applications) months later is not a great look. It's good long-term brand value thinking to close the regrets window earlier.

Definitely “Caution” stage: https://www.macrumors.com/roundup/mac-studio/

pstuart•1d ago
Perhaps not? Think of all the Chrome tabs you could keep open at one time!
bombcar•23h ago
512 GB of RAM? Could probably have five tabs and two electron apps at the same time!
rbanffy•7h ago
“Rendered obsolete” is doing a lot of work here. It might have been discontinued, but it is still faster than the rest of the line and the only self-contained computer that can handle models that large.

The most I would say is that it was discontinued, but, depending on how it goes, it might be just sold out for now pending memory procurement.

jmull•1d ago
Interestingly, the "ultra" Mac Studio released a year ago was based on the older M3, not M4. Apparently, the work to "ultra-fy" a CPU is significant (which makes sense) so there can be a lag.

Not that they have to follow the pattern, but a Mac Studio Ultra released later this year might be based on M4. Or one based on M5 might be released a year or more from now.

groundzeros2015•1d ago
My understanding is “ultra-fy” means putting two together. I think it’s about having inventory.

https://www.apple.com/newsroom/2022/03/apple-unveils-m1-ultr...

zozbot234•1d ago
It used to mean that, but the new M5 Pro and M5 Max have separate CPU and GPU chiplets with an interposer, similar to how the previous generation Ultras were based on connecting two Max full dies. So it's unclear whether there will be any Ultra for the M5.
avidphantasm•1d ago
And here I was hoping they would put an M5 Ultra in a MacBook Pro. Maybe they will add it as an option to the 16” at a later date.
bombcar•1d ago
I wonder if the MacBook Pro can handle the thermal envelope of actually running an Ultra flat out.
wtallis•20h ago
There are gaming laptops that come with power bricks rated for higher output than a Mac Studio's power supply. M3 Ultra levels of power dissipation are possible to handle in a laptop, but it wouldn't look much like a MacBook Pro. That kind of gaming laptop typically has four fans (compared to two on a MacBook Pro), and large vents on the sides, bottom, and back of the machine allowing them to move a lot more air through the system.
bombcar•15h ago
Post-Jobs Apple is willing to do some things he’d not allow but I can’t see them selling such a beast.
pram•1d ago
This is never happening, the Ultra needs a giant copper heatsink.
rbanffy•7h ago
Ditch the Aluminium and go with a copper MacBook Pro. Or silver. If you get it with a terabyte of RAM, the silver shell will be a small part of the total costs.

Argentium 960 would most likely be the best alloy for the job, as it’s a good heat conductor and doesn’t tarnish like pure silver.

rbanffy•7h ago
This tells me the Max CPU chiplet has two interfaces to GPU dies. If you can connect two CPU chiplets via the same interface, making an M5 Ultra is doable by joining two CPU chiplets, each with a GPU chiplet attached.
kube-system•1d ago
The existing ultras are two max dies connected together with TSMC’s CoWoS-S interposer. But as I understand the interposer can have yield issues, so yes — you put two together, but it’s not quite as easy as snapping together legos.
foobiekr•1d ago
It's really probably more about wafer and chip allocation.
fartfeatures•1d ago
The M4 Max lacks the UltraFusion interconnect, making an M4 Ultra impossible. We might however see an M5 Ultra due to the new Fusion Architecture in the M5 Pro and M5 Max chips (just announced for the latest MacBook Pro), which uses a high-bandwidth die-to-die interconnect to bond two dies into a single unified SoC—similar in concept to UltraFusion but evolved for better scaling, efficiency, and features like per-GPU-core Neural Accelerators.

Reports and leaks strongly indicate Apple is preparing an M5 Ultra (likely fusing or scaling from the M5 Max using this advanced interconnect tech) for a Mac Studio refresh later in 2026, based on Bloomberg/Mark Gurman and other sources. This would bring back the top-tier "Ultra" option after skipping it entirely for M4.

JumpCrisscross•23h ago
> M4 Max lacks the UltraFusion interconnect

Any idea why? Wasn't that on the M1?

bombcar•22h ago
I suspect that the cost/benefit isn't there. Those who need the "biggest Ultra" will be happy with the previous generation or so, and so they'll refresh that on a 2 or 3 year cycle.
rbanffy•7h ago
Given that generation gains are not sufficient to make a Max twice as fast as the previous-gen Ultra, a longer cycle is rational. The M3 Ultra is still the fastest M-series system.
hexyl_C_gut•39m ago
M5 Max outperforms M3 Ultra.

M3 Ultra, 3.3k single core 27k multicore on Geekbench. https://browser.geekbench.com/v6/cpu/16959045

M5 Max, 4.3k single core 29k multicore https://browser.geekbench.com/v6/cpu/16956481

caseyf7•22h ago
There will absolutely be an M5 Ultra. Gurman has confirmed it.
halJordan•18h ago
That means little and less
cyanydeez•1d ago
Just got a strix halo ROG Z13 this month with soldered UMA memory, 128GB LPDDR5X-8000. It cost ~$3k.

Amazon is selling 128GB memory kits @ 5600MHz for $3k.

I think there might be a market failure guys.

tonyedgecombe•1d ago
It’s only a failure if manufacturers don’t respond by increasing capacity.
cyanydeez•1d ago
This is like believing there's unlimited, instant-on capacity. The same type of "we can just tariff whatever we want, and magically, the market will figure it out".

That makes sense for a few products, but not something that takes billions of dollars, multiple factories, etc to produce.

tonyedgecombe•1d ago
That is true. Apart from all the other times demand for memory has exceeded supply.

You can't compare it to tariffs because the cheaper alternative to investment is to bribe your politicians.

Aurornis•1d ago
> Amazon is selling 128GB memory kits @ 5600MHz for $3k.

128GB memory kits are not $3K. Closer to half of that. Amazon is not a good source of RAM pricing.

freedomben•1d ago
Where would you recommend sourcing RAM?
Aurornis•1d ago
Newegg has 128GB kits from quality vendors at $1500: https://www.newegg.com/crucial-pro-128gb-ddr5-5600-cas-laten...
chiph•1d ago
I think it's unlikely that Apple is paying the spot price for memory. They almost certainly negotiate delivery/price contracts in advance. Maybe the contract for the chips used in the 512GB model will expire soon?
ajross•1d ago
Even if so, everyone lives in the same market. If Apple has a contract for those chips at an artificially low price, it's to their advantage to sell them to someone else at market value instead of putting it in a Mac where they'd have to increase price (and take the PR hit) significantly to make the same profit.
tonyedgecombe•1d ago
Not if their margin on the completed product is higher than the potential profit on the memory.

My guess is they are doing this because they make more money selling two 256GB devices than they do on one 512GB device.

don_neufeld•1d ago
Or they believe the long term value of two customers is bigger than one.

It can be about more than the single sale.

Faaak•1d ago
That doesn't take into account the profit generated by selling the mac in itself
thfuran•1d ago
Or the fact that if they sell all their RAM without putting it in devices, they won’t be able to sell devices, and some portion of their customer base will leave their ecosystem, possibly forever.
ajross•1d ago
The story is literally about them cancelling a product variant...
thfuran•1d ago
And you think this is the first sign that they’ve decided they’re going to spend the next few years being a RAM reseller before starting to sell consumer products again?
ajross•23h ago
No, but "shipping less RAM" is clearly on that spectrum. The point wasn't about literal product strategy, it's that there's a limit to what actions are financially feasible and it's set by "what else could you do with that junk?"
zitterbewegung•1d ago
Yeah, WWDC is happening soon and the M5 Ultra is in production. The new pricing will be reflected in the highest config (768GB is rumored), though.
storus•1d ago
The M5 Max still tops out at 128GB of RAM; wouldn't one expect 192GB if there were any indication the M5 Ultra would have 768GB?
rz2k•1d ago
The maximum memory configuration for the M3 Max MBP was also 128GB.
storus•1d ago
That's my whole point. M3 Max 128GB -> M3 Ultra 512GB. M5 Max 128GB -> M5 Ultra 512GB. But if M5 Max 192GB -> M5 Ultra 768GB, i.e. Ultra having 4x the memory of Max.
zitterbewegung•1d ago
It is a rumor at this time, and we are going from M3 to M5 on the Ultra, not from M4 to M5.
muyuu•1d ago
it's Apple and they don't like to adjust prices to the market

other companies would have just hiked the price of the 512GB model to reflect the lack of supply and to allow people who really need that model to pay for it dearly

but that comes with some PR damage that Apple would rather not deal with

rootusrootus•1d ago
But they did raise the price of the 256GB model.
muyuu•1d ago
Yep, but if they had to double or triple it on short notice, they'd have just removed it from the store instead. I imagine the RAM is going into 256GB systems for more $$$, but still nothing really alarming for the consumer.
Thorrez•1d ago
>Apple buys and uses so much RAM across all its product lines that it’s in a better negotiating position than the likes of Framework or Raspberry Pi, but CEO Tim Cook acknowledged in the company’s last earnings call that memory pricing could begin to eat into Apple’s profit margins later this year.
cyanydeez•1d ago
I would think the price gouging on memory tiers is why it's in a better negotiating position. Having a 200% markup means minor market conditions won't prevent them from paying.
AnthonyMouse•1d ago
There's also the fact that they were charging $200 to add 8GB of RAM before the prices went up, when that much RAM was something like $70 at retail.

The problem then is that when the supply gets more expensive and you were already charging the maximally-extractive price to customers, they can't eat much more of a price increase, so instead most of it has to come out of margins.
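A quick sketch of that squeeze, using the comment's $200 upgrade price and roughly $70 component cost (the doubling of supply cost is a hypothetical):

```python
# Margin on a RAM upgrade before and after a supply-cost spike.
# $200 upgrade price and ~$70 component cost come from the comment;
# the 2x cost increase is a hypothetical for illustration.
upgrade_price = 200   # what was charged for +8GB
supply_cost = 70      # rough retail cost of 8GB before the spike

margin_before = upgrade_price - supply_cost      # 130
# If the supply cost doubles and the retail price can't move much,
# the increase comes almost entirely out of margin:
margin_after = upgrade_price - supply_cost * 2   # 60

print(margin_before, margin_after)  # 130 60
```

The point: a fat markup absorbs a lot of input-cost inflation before the seller is forced to raise prices.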

rafaelmn•1d ago
I was configuring my M5 MBP preorder and 48=>64 was 250 EUR, so not sure if they cut prices or your numbers are outdated?
Detrytus•1d ago
Prices in the US are usually much lower than in Europe. I just checked and the 48->64 RAM bump is still $200.

I just did a 14" MBP with M5 Max, 128GB RAM, 4TB SSD, nano-texture display. Price difference is $5849 vs 7004 EUR ($8136).

rafaelmn•23h ago
I'd say half of that difference is that we have VAT included in price.

But my point is that's a 16GB jump for $200, not 8GB.
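A quick sanity check on how much of that gap is VAT (the 21% VAT rate and 1.16 USD per EUR are hypothetical round figures):

```python
# How much of the US/EU price gap is explained by VAT?
# Assumed (hypothetical) figures: 21% VAT, 1.16 USD per EUR.
us_price_usd = 5849.0
eu_price_eur = 7004.0
vat_rate = 0.21
usd_per_eur = 1.16

eu_ex_vat_usd = eu_price_eur / (1 + vat_rate) * usd_per_eur  # strip VAT, convert

gap_total = eu_price_eur * usd_per_eur - us_price_usd   # full gap in USD
gap_ex_vat = eu_ex_vat_usd - us_price_usd               # gap with VAT removed

print(round(gap_total), round(gap_ex_vat))  # 2276 866
```

Under these assumptions, VAT explains well over half of the difference, with the rest being genuine regional price markup.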

bombcar•22h ago
US prices are often low enough that it's almost worth the flight just to grab one.

14" MBP M5 Pro 64GB - $2999 or 3449 €

rafaelmn•19h ago
That's not US prices; it's just the price without VAT vs with VAT included. The US also has sales tax, it's just not included in list prices.
SirMaster•3h ago
Well but some states don't have a sales tax.
asdff•23h ago
Actually that is relatively cheaper than Apple has ever sold RAM. They would always charge $200 for each RAM upgrade, and it might have been only 4GB or less back then.

The twist now, though, is that they started soldering in the RAM with the Retina MacBook, so you can't route around Apple's extortionate pricing like you could in the past by just buying components off the market.

Such a stupid cartoon-evil-villain move too, just to force us into getting RAM from them. I have never been memory-bandwidth bound (Apple's excuse for soldering in the RAM) in my life, and yet I am forced to buy computers that optimize for this at the expense of things I actually care about, like serviceability. Also consider that it incentivizes people to buy more RAM than they need today in an effort to future-proof their device, in a time of RAM shortages. And who knows, maybe by the time that RAM amount is relevant the CPU can no longer keep up, so the hoarding might not even be for anything either.

AnthonyMouse•15h ago
> I have never been memory bandwidth bound (Apple's excuse for soldering in the RAM)

This isn't even a plausible excuse. For the entry level machines, the soldered RAM only has the same memory bandwidth as ordinary laptops. For the high end machines it likewise doesn't have any more than other high end machines (Threadripper/Epyc/Xeon) which just do the same thing as Apple -- use more memory channels -- without soldering the RAM.

And it's especially a kick in the teeth right now, because it means you can't buy a machine with less RAM than you might prefer and then upgrade it later if prices come back down. If it's soldered, then whatever you can afford at today's prices is all the machine will ever have.

pwarner•23h ago
I think part of what's happening lately is that chip folks are starting to realize they can make margin too. Maybe it's possible thanks to consolidation, but for sure folks see the crazy margins Nvidia, Apple, etc. have, and I suspect they're like: we want that too!
phamilton•1d ago
15 years ago I was an intern at Micron and learned they passed on a contract with Apple because Apple insisted on discounts and there wasn't a compelling reason for Micron to reduce its profit.

So yeah, Apple probably does pay less. But the market has enough demand that suppliers do say no.

zozbot234•1d ago
This is actually relevant, because DRAM costs just as much now per GB as it did 15 years ago (that's controlling for inflation; it's as much as it cost 20 years ago in nominal terms).
mapt•1d ago
It costs the same; we just book the difference as the opportunity cost of not unloading the memory on the spot market.

If I buy contracts for 1 gold bar at $500, and the gold price runs to $1200, I can either continue to market my gold-containing product for the same profit margin, or I can unload all that gold for $1200/bar and make a profit of $700/bar. If my profit margin is high and it doesn't take many gold bars to make a thousand units, maybe discontinuation doesn't make any sense. But if my product is "solid gold statuary of Dear Leader", and the bars are most of my cost basis, I know what I'd do.
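The arithmetic in the analogy, spelled out (the statue's sale price and non-gold costs are made-up numbers for illustration):

```python
# Opportunity cost of using contracted gold (or RAM) in a product
# instead of reselling it at the spot price.
contract_price = 500    # locked-in price per bar (from the comment)
spot_price = 1200       # current market price per bar (from the comment)

# Profit from simply flipping the contract:
flip_profit = spot_price - contract_price   # 700 per bar

# Using the bar in a product only makes sense if the product's margin
# over *spot* (not contract) cost is still positive; the contract
# discount is earned either way, so it is sunk for this decision.
product_revenue = 1500   # hypothetical sale price of the statue
other_costs = 100        # hypothetical non-gold costs
margin_over_spot = product_revenue - other_costs - spot_price

print(flip_profit, margin_over_spot)  # 700 200
```

If `margin_over_spot` goes negative, melting the product line down (or cancelling it) beats shipping it, which is the comment's point about gold-heavy products.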

groundzeros2015•1d ago
You're thinking only in finance terms. Their goal in buying the contract is to secure the good. The ability to hold prices will let them sell more units, which is the number they want to show.
bombcar•22h ago
More importantly it'll probably get some people to switch, and potentially they have a customer for life now.
kube-system•1d ago
Yes, but the clock has been ticking, new products are being released, and at some point they will be negotiating the next contract.
don_neufeld•1d ago
Yup, and it looks like it will be a tough negotiation:

https://www.macrumors.com/2026/02/26/apple-agrees-100-price-...

boznz•23h ago
Apple needs to seriously consider some sort of vertical integration on memory; it has proved it can do it with CPUs.
glenstein•19h ago
Amazing that it's gotten to that point but I think that's right. It's more intuitive that you would need vertical integration with your processing chips because of the degree of expert specialization necessary to produce them, especially in close coordination with a major product release.

By comparison, RAM seems much more of a commodity, but the game has changed, and there may now be an important strategic interest in sourcing and supplying your own.

Culonavirus•1d ago
It's not a shortage, it's a cartel.

https://www.youtube.com/watch?v=jVzeHTlWIDY

Alifatisk•1d ago
The video is an hour and a half long. It's a whole documentary. Very detailed and well thought out, but too long for me at the moment. I'll see if it's possible to get a summary somehow.
ahurmazda•1d ago
aside: YT has an AI summary option (unless the creator opts out). Look for the sparkle button. Personally, it gets me 80% there most days.
Alifatisk•1d ago
Couldn't find the summary button anywhere, but when searching around I found out that you can apparently paste YouTube links into Google AI Studio and summarize them.
HPsquared•1d ago
That sounds pretty ironic, given the topic.
Imustaskforhelp•1d ago
I haven't watched the video, but I went way too far into the weeds of the RAM crisis.

I am not sure what the video suggests. This is my own understanding of things after I got way too invested in why OpenAI needs all of this RAM all of a sudden (on a random Tuesday).

My understanding, TL;DR: the Stargate project had OpenAI, Oracle, SoftBank, etc.

SoftBank got the money from a Japanese bank loan[0] at low interest rates and actually scrambled to find the $20 billion (combined with Oracle, they committed around $500 billion).

(Btw, the datacenter side is being done in a similar fashion by Oracle.)

Almost all of that money, when given to OpenAI, was used (or will be used?) to commit roughly 20% of the world's RAM supply, at a premium, because these companies just repackage RAM in a different order to sell it as "AI RAM". And then Micron shuts down its consumer brand (Crucial).

This has now caused RAM prices to spike to five times the cost over the last couple of months. The inflation is also happening in hard drives and NAND in general.

The largest impact I can see is that even companies like Google were scrambling to find RAM, which makes me question even more why OpenAI would need so much RAM all of a sudden. Google and Anthropic needed RAM too, but not 20% of the world's supply, and not committed in such a way, and I am not sure datacenters are even being built for that RAM to be stored in[1].

The OpenAI datacenter in Argentina, for example, is operated by a shady company that appeared only 1-2 years ago, IIRC. So a $500 billion project is just picking random companies... Yeah, no. I believe they don't trust it themselves, especially when the company is scrambling for money.

All of this feels very cartel/monopoly-ish to me: push the competitors, and the people running open-source models, out of the market. Another effect is that we normal everyday people get impacted too, and I am sure that when they made such a large decision they must have thought about that internally. But we all know OpenAI's morality now, after the DoD deal.

But it doesn't seem like Google and the other big companies are that impacted by it all; only the average consumer and hosting providers (hence OVH and Hetzner raising prices, for example). The average AWS/GCP/Azure makes enough money that they might not even raise prices for some time, and they'll be fine, with the added benefit that people worried about rising prices will move to Azure/GCP/AWS even more.

Edit: Gamers are being pushed out of consoles and everything too, and some point to the cloud connection, with AWS coming out and saying (paraphrasing) that they want gamers on the cloud, as meaning it's all being done to move everything to the cloud.

I believe this might be only half the story: OpenAI does benefit (somewhat) from everything moving to the cloud, but it's done even more to prevent competition across the whole space.

I believe they thought about it and treated it as a plus, but first and foremost it helped them maintain their flimsy lead in AI models, as more and more competitors catch up, by stifling competition through prices rising five-fold. Gamers and normal people were just the largest casualty in this crossfire.

I was thinking over the past month, as I found all this, that damn, OpenAI's morality sucks and they did all of it on purpose.

And then they had the Department of Defense deal and the whole controversy surrounding it, so yeah, that too.

OpenAI doesn't want your benefit. It wants its profit, and when these conflict, OpenAI doesn't care a cent about you, no more than the cent you give it.

[0]: https://www.bloomberg.com/news/articles/2026-03-06/softbank-...

[1]: https://www.shacknews.com/article/148208/oracle-openai-texas...

Alifatisk•1d ago
Incredible digging. I remember reading comments that the reason for the price hike was that Sam Altman secured a deal in secrecy with the few RAM producers, where they promised to reserve a large portion of their production for OpenAI for the next few years (I don't remember how long). Supposedly Sam will just put it in a warehouse to collect dust.
Imustaskforhelp•1d ago
> Incredible digging

Thanks. I appreciate your kind words. I was thinking of writing some piece/blog about it, but procrastination is definitely something :) I am just happy that I finally wrote a comment at least explaining most of my understanding. That's more than fine for me.

> Incredible digging. I remember reading comments that the reason for the price hike was that Sam Altman secured a deal in secrecy with the few RAM producers, where they promised to reserve a large portion of their production for OpenAI for the next few years (I don't remember how long). Supposedly Sam will just put it in a warehouse to collect dust.

I do believe that's going to be the case as well. Most of the RAM is probably not needed currently (that's what I feel), so it's going to collect dust. That, or Oracle/Microsoft will use it within their datacenters as old RAM breaks down, extending the monopoly given their close ties to OpenAI.

Even if OpenAI internally sells it at half the market price to Microsoft/Oracle, they still technically turn a profit.

I actually felt too conspiratorial thinking about it when I first discovered all this, because I was under the assumption that OpenAI actually needed the RAM too. But seeing OpenAI's recent moves with the Department of Defense, I definitely think they did this on purpose.

r_lee•1d ago
I would say, just post these kinds of rants on Substack or X or something. Don't LLM-format it; just lay it out, fix whatever typos, and let loose.

it'd be interesting to just hear some thoughts and opinions from someone who has done some research on the topic, in a light way, vs a huge article/documentary

gruez•1d ago
>Almost all of that money when given to OpenAI was used/(will be used?) to commit 20% of the Ram supply of the whole world at a more expensive package because these companies just package ram in different order to get "AI ram" and then Micron shuts down the consumer brand (Crucial)

>[...]

>All of this does feel very cartel/monopoly-ish to me to push the competitors out of the market or the people running open source models out of the market and another benefit of it for OpenAI

Nothing you described is actually "cartel/monopoly-ish" beyond "big players have more money to splash around". It's fine to go look at that and go "grr, I hate big tech companies", but the claim of "It's not a shortage, it's a cartel." isn't substantiated. The latter implies some sort of malice beyond what could be explained by standard scarcity thinking, eg. "there isn't enough RAM to go around. We need RAM, so let's stock up".

Imustaskforhelp•22h ago
My point is that, in an ideal world, there is enough RAM to go around even with LLMs; it's rather that stocking up on RAM gives you so much benefit and leverage over your rivals in this space that you have no reason not to.

So it isn't that there isn't enough RAM to go around, period, but rather an ideology along the lines of "this town ain't big enough for the two of us" (OpenAI vs Anthropic/Google/Chinese open-weight models).

At least that's my understanding of the situation, and I can be wrong about it too, for what it's worth.

harias•1d ago
You can use Gemini or NotebookLM to summarize it
sva_•1d ago
I haven't watched the whole thing either, but basically

https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal

+ showing that the people responsible have only been promoted within those companies

+ pointing out that the 3 companies at the heart of it (Micron, Samsung, SK Hynix) now have 95% of the market share

+ the hypothesis that they're doing it again (or rather, have continued doing it "business as usual")

cubefox•1d ago
No, it's a shortage due to high demand from AI data centers.
amelius•1d ago
Why are you so sure?
gruez•1d ago
You could ask the same about OP.
amelius•1d ago
At least that video contains a bunch of argumentation.
cubefox•1d ago
Inference to the best explanation.
groundzeros2015•1d ago
I hope so. If the government isn’t protecting them all we need to do is wait for new participants in the market.
snczl•20h ago
The story here is about what lesson was learned by the DRAM cartel after they got busted and hit with large fines. One might hope the lesson learned would be, "we should not fix prices", but what got them in trouble was colluding secretly. What if we just did it via earnings reports, press releases, and other public statements?

While there is some market variance like the 2022 to 2023 glut, DRAM prices haven't fallen in real terms in over 15 years. This was all done by controlling supply, and it was all done in public. It starts with one of the big three putting out a statement like, "Samsung is considering reducing DRAM wafer output due to softness in the mobile PC segment." The actual reason varies and often makes little sense.

This is followed by similar public statements from the other large vendors expressing a willingness to reduce supply. Once everyone commits in this way, the companies follow up with announcements of actual supply reductions. You can watch this happen any time prices start to dip.

My bet is if the DOJ investigates, they will not find the same sort of embarrassing smoking gun emails between representatives of Micron, Hynix, and Samsung. The collusion was all done in public. The companies will claim it is just good business management, a strategy known as "conscious parallelism." They used this exact defense to get a 2022 antitrust lawsuit dismissed.

That said, their goal seemed to be just keeping prices fixed. They wanted to avoid boom and bust cycles, keep profits high, and keep prices stagnant. A massive price hike invites investigations and creates problems. If DRAM prices just never fall, they can enjoy healthy profits with little risk.

But what happens when your intentionally constrained supply hits a sudden large spike in demand? Prices skyrocket, everyone gets mad, and demands investigations. My guess is instead of being thrilled with the price spike, the executives at the large DRAM manufacturers are very worried someone put something incriminating in a document somewhere that can be subpoenaed ("how we're going to fix prices in public and get away with it").

zozbot234•19h ago
Publicly announcing reductions in DRAM wafer output is not per se nefarious. You need to do it from time to time anyway, if only as part of retooling towards newer technologies that will be required when making DRAM dies for newer standards.
lenerdenator•1d ago
Now how am I supposed to develop Electron apps and use Chrome?

In all seriousness, though, as one of the uninitiated, what would be the value of hosting LLMs on a machine like this that has a lot of memory that you pay for up front versus some sort of VPC-based approach?

groundzeros2015•1d ago
Have you seen how much GPUs cost to rent? Note that this memory is shared between CPU and GPU.
robotresearcher•1d ago
One factor is that you may not be in a position to push your working data to a third party service, for security or legal reasons.
saurik•1d ago
So, the question I wonder: is that it for this tier? Will we even see a 512GB variant of the next model?
ganoushoreilly•1d ago
I suspect they'll still want to offer it given the push they've been making over the last year with RDMA. My guess is the 512gb or larger studio will largely be a byproduct of the systems they're designing for their own AI efforts in the datacenter. I don't think this is the end of it for the longer term.
benbojangles•1d ago
You can keep your 512GB Mac Studio; I'm running qwen3.5:0.8b on an Orange Pi Zero 2W and learning just as much as they are.
zitterbewegung•1d ago
The Mac Studio's highest config was a great value for AI workloads, at least for inference, and no one is reporting this…
knollimar•1d ago
Don't you need two 512GB ones for the unquantized latest Chinese models?
redman25•1d ago
For consumers, there's little reason to run unquantized, especially for large models, which take less of a hit from quantization. I'm running a 200B model at Q3 with very little degradation. A 1000B model would see even less change.
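A rough rule of thumb for why quantization helps so much here: weight memory is approximately parameter count times bits per weight, ignoring KV cache and runtime overhead (which are not negligible). A quick sketch:

```python
def model_weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB: params * bits / 8.
    Ignores KV cache, activations, and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 200B-parameter model at different quantization levels
# (~3.5 bits/weight is a rough average for a Q3-style quant):
print(round(model_weight_gb(200, 16)))   # FP16 -> 400
print(round(model_weight_gb(200, 8)))    # Q8   -> 200
print(round(model_weight_gb(200, 3.5)))  # ~Q3  -> 88
```

So a model that needs 400GB at FP16 fits in well under 100GB at Q3, which is why a single high-RAM machine suddenly becomes viable.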
zitterbewegung•1d ago
Getting 512GB of RAM at that price point is cheaper than anything else. That's why Apple stopped production, to divert supply to the M5 Ultra.
hu3•1d ago
Yes, and the result of this $10k endeavour is a much slower and dumber model than any SoTA $20/mo API. On top of the maintenance burden of keeping software/models updated.
marcuskaz•1d ago
Inventory is tight too: if you look at delivery/shipping times for the Mac Studio and Mac Mini, I'm seeing April/May.
kmfrk•1d ago
A lot of people also got into buying Macs for OpenClaw, so demand is probably up as well.
copperx•1d ago
What's the deal with the Mac Mini and OpenClaw? A VPS is a better alternative.

Is it because of iMessage?

andai•1d ago
You do actually need to run it on a Mac, if (and only if!) you require integration with Mac-only software. But the main factor is probably just "all the cool kids are doing it" ;)
kube-system•1d ago
iMessage… and Safari. Browsing the web from a headless VPS has hurdles.
Imustaskforhelp•1d ago
> Safari. Browsing the web from a headless VPS has hurdles

Hooking things up to Puppeteer, maybe?

You can use Puppeteer with Chromium's remote debugging option, IIRC, which uses WebSockets under the hood.

Then you can connect to this from your PC or, theoretically, any control server. Surprised not to hear much work on that front, now that you mention it.

kube-system•1d ago
The CDP debugger is easily detectable client-side, and many websites will flag your traffic as undesirable.
Imustaskforhelp•22h ago
I didn't know that, sorry. But is there no way to make the CDP debugger less detectable? Seems doable to me, but maybe there's a catch if it hasn't already been done by somebody?
cute_boi•18h ago
There are many ways to make it undetectable, but it is a cat-and-mouse game.
teaearlgraycold•22h ago
iMessage is the only explanation I can find. Minis aren't powerful enough for agentic models unless you're getting a rather expensive version (I could see the MX Pro w/ 64GB working), at which point they don't have the price appeal of the base model anymore.
andai•1d ago
Yeah, on the Discord you see a lot of people asking how much RAM they need to run local models. There seems to be a lot of demand for it.
root_axis•1d ago
I am not convinced there are that many people actually buying macs for OpenClaw.
nickthegreek•1d ago
I just walked into a Micro Center yesterday where not only did they have a huge stock of Mac Minis, they were all advertised at 15% off and "runs OpenClaw".
bombcar•22h ago
https://www.microcenter.com/search/search_results.aspx?fq=ca... it's even mentioned on each one online
Sathwickp•1d ago
Was bound to happen; it never found product-market fit.
hx8•1d ago
I'm sure the margin was great for the ones it sold.
api•1d ago
I’ve been in tech for a long time and have seen RAM shortages and price spikes before. This one’s fairly bad but they resolve in 1-3 years.
pixl97•1d ago
Ya, how's that working out for GPUs?
api•1d ago
It’ll take longer for two reasons.

One is that it’s a more complicated part with tougher fab requirements.

Two is that it's not a commodity. AMD can't make Nvidia GPUs; they have to design their own. Everyone has patents and trade secrets and copyrights. Patents expire and knowledge diffuses, but that adds another time lag.

AMD and Intel are fully aware of the demand and are working on it.

RAM is a commodity. Totally interchangeable standard part. Also simpler to fab, thus quicker and easier to scale up.

Oh, and I’d like to add: everyone is afraid it’s a bubble that will pop. Nobody wants a bunch of stranded capex. That has also happened before many times. So that puts brakes on it too.

groundzeros2015•1d ago
The cure for high prices is high prices.
layer8•1d ago
At least two years remaining this time, according to current fab planning.
reenorap•1d ago
I've been in tech for ~40 years now and I've never seen anything like this. The downstream repercussions for consumer products that have no access to cheap memory are devastating; it's an extinction-level event for most low-cost providers of cell phones, TVs, etc.
TMWNN•23h ago
>I've been in tech for ~40 years now and I've never seen anything like this.

Then how do you not remember the DRAM shortage of the late 1980s?

reenorap•22h ago
At worst, back in 1988 it impacted only PCs.

This shortage in 2026 is more consequential across the board and impacts consumer electronics as a whole, and the fact that it's going to last years means many low-cost manufacturers are going to close up shop because they won't be profitable.

kasabali•5h ago
I'm pretty sure there were more DRAM manufacturers back then, and spinning up a new fab probably didn't require as much know-how, capital or even time.
GeekyBear•1d ago
The rumor from Gurman is that the M5 Ultra Mac Studio ships in the first half of this year.

This may just be a sign that the M5 Ultra Mac Studio is shipping sooner rather than later, as it's common for Apple to push out ship dates for soon-to-be-replaced products.

We do have leaked benchmarks showing that the M5 Max outperforms the M3 Ultra currently shipping in the Mac Studio, so buying an M3 Ultra Studio right now would be a terrible idea.

aleph_minus_one•1d ago
Slightly off-topic side remark:

Every mathematician and computer scientist should feel deeply confused that the M... Ultra is more powerful than the M... Max.

Why? Because if something is the maximum, there doesn't exist anything larger/better. :-)

user_7832•1d ago
This thing has been going on for a while, unfortunately. If I told you "here's an Air, a vanilla, and a Pro", what would you expect? The vanilla is the base, the Air is lighter, and the Pro is nicer, right?

Well, for iPads, the base has (had? I haven't closely followed them for a while) an older CPU for some reason. And the Air is actually a "Pro-lite" rather than a weight-optimized version.

Don't get me started on where the Mini sits, or what happens if you want "nicer" features like 60Hz+ displays in a small form factor... a feature that budget Android tablets have had for years.

GeekyBear•1d ago
They should use something simple, like AMD Ryzen AI Max+ Pro 395?
hu3•1d ago
whataboutism doesn't make it any better, you know
calf•23h ago
Not really. Max is to violet as Ultra is to ultraviolet, as in something exceeding an established category (visible light).
guerrilla•1d ago
I wonder if the war with Iran could actually fix the RAM shortage. If this continues it really could put a damper on datacenter rollout.
kube-system•1d ago
How much of the capital investment that’s fueling the current expansion is already allocated?
guerrilla•22h ago
No idea. Let's assume not all of it.
mv4•1d ago
I tried finding a 128-256GB Mac Studio online and most options would not ship for at least "8-10 weeks".
j45•1d ago
More likely the M5 Max Studio is coming out; the M5 Max MacBook Pros just came out.

Also, the 512GB SSD version has a slower SSD than anything 1TB and up. The new SSDs on the M5 are, I believe, much faster, and what's coming will likely receive that.

There's no doubt there's a RAM shortage and price increases; the biggest companies in the world lock in their pricing well in advance, and the remaining leftovers are where consumers experience shortages.

tencentshill•17h ago
This is 512GB RAM, not SSD. Kind of insane that has to be differentiated now.
j45•1m ago
Total read error on my part, thanks.

I guess it gives more credence to stitching a few more of them together in the meantime.

827a•1d ago
IMO it's more nuanced. They're likely in production ramp-up of the M5 Ultra Mac Studio, for release in the next ~3 months; they have pre-purchased bins of memory from the supply-constrained major memory suppliers; and they need as much as they can get because they want to push an M5 Ultra config to 768GB to continue the "you can run local models" story that the M5 Max MacBook Pro started telling last week.

Going beyond 512GB to 768GB of memory is something of a threshold that will allow Apple to claim local capability for significantly more models. Qwen3-235B, Minimax M2.5, and GLM 4.7 could kind of run with no quantization on 512GB, but they'll run comfortably at 768GB. DeepSeek-V3.2 and GLM 5 may also work at some level of quantization.
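As a rough sketch of the arithmetic behind that threshold (the 60GB overhead budget for OS, KV cache, and activations is an illustrative assumption, and parameter counts are approximate):

```python
def fits(params_b: float, bits: float, ram_gb: int, overhead_gb: int = 60) -> bool:
    """Crude fit check: weight memory (params * bits / 8) plus a fixed
    overhead budget for OS, KV cache, and activations. The 60GB overhead
    figure is an illustrative assumption, not a measured number."""
    weights_gb = params_b * bits / 8
    return weights_gb + overhead_gb <= ram_gb

# A 235B-parameter model at BF16 has ~470GB of weights:
print(fits(235, 16, 512))  # False: weights alone nearly fill 512GB
print(fits(235, 16, 768))  # True: comfortable headroom at 768GB
```

That headroom is what separates "kind of runs" from "comfortably runs" with a usable context window.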

taf2•1d ago
I hope you are right
criddell•1d ago
Is Apple telling the "you can run local models" story, or is it third parties?
wahnfrieden•1d ago
Yes Apple promotes local model capabilities in their marketing
827a•1d ago
Yes:

> A powerful Neural Accelerator is built into each GPU core of the M5 family of chips, which dramatically speeds up AI tasks like image generation from diffusion models, large language model (LLM) prompt processing, and on-device transformer model training. [1]

[1] https://www.apple.com/macbook-pro/

bytesandbits•23h ago
Apple
bytesandbits•14h ago
https://machinelearning.apple.com/research/exploring-llms-ml...
lolive•1d ago
What’s the price of that beast? #meCryingTearsOfBlood
827a•23h ago
Apple will be the first company to pioneer a new "work for tokens" program; simply commit yourself to six months of servitude with The Company to pay off your new Mac Studio purchase.
unfocused•23h ago
Less than $10,000, depending on what CPU and storage you select.
alsetmusic•20h ago
Heh heh, I see you're new to Apple's incredible shamelessness in price-gouging memory.

Just kidding, I'm sure you're aware. I just wouldn't be the least bit surprised to see them go well beyond that.

bombcar•23h ago
Let's assume a Mac Studio M5 would start at $1999, and that an M5 Max upgrade to 128GB would be about the $1000 it is now. Then an M5 Ultra 768GB would be something like $1999 + $1000 + $4000, cheaper than the current top of the line (which will never happen), so I'd presume about the same $10,000.

Or they could finally make the Mac Pro respectable by having it be two M5 Ultra Mac Studios stuck together (or giving it NUMA RAM: on-chip + expandable).

rbanffy•10h ago
I was betting on the 1TB Mac Studio, but half a terabyte was already an insane amount of memory.
pjmlp•9h ago
Apple clearly no longer cares about the workstation market; the Mac Pro has joined OS X Server.
wil421•2h ago
Hasn't it been that way for years? Almost all of the people I've seen selling used Mac Pros use them for creating music. I assume the Studio is a better, cheaper option.
pjmlp•1h ago
Workstations are more than just music, and there are still a few folks who believe Apple will some day release a new Mac Pro that fits their hardware needs, without having to go either Windows or Linux.

https://cottonbureau.com/p/TR4KZV/shirt/mac-pro-believe#/300...

rewgs•22h ago
My theory is that they're going to release a new Mac Pro that's about half the size of the current one: enough space for some PCIe slots, but otherwise smaller, given the enormous amount of wasted space in that thing since moving from Intel to Apple Silicon. Guessing the rack-mount model, should they continue selling it, will be 3U or 4U instead of 5U.

I know everyone thinks they're going to just kill it, but I don't see it. Apple's move under Tim Cook has been to exhaust supplies (see: filling the Intel Mac Pro chassis with air and not updating the CPU), letting people predict its death (see: 2013 -> 2019 Mac Pro silence), and then redesigning it into something people want while utilizing it as an opportunity to segment specs across their SKUs.

The Studio will remain the high-powered creator machine, whereas the Mac Pro will be retooled into an AI beast.

thefounder•19h ago
The reason people buy the Studio with the high-RAM config is actually the unified memory. This is unique to Apple. I'm not sure what a Mac Pro would do with PCIe cards. It would be useless for AI, because what you want is unified memory that the GPU can use for AI, not just plain RAM.
rewgs•16h ago
PCIe cards would indeed be useless for AI unless Apple supports third-party GPUs, but there are certainly some pro creators that would still prefer to have them. I myself work in large-template film/game scoring and while we all love our Mac Studios, they're usually housed in a Sonnet chassis so that we can continue to use PCIe cards. Had Apple kept them in parity with the Studio w/r/t CPU and RAM, the rack-mount version of the Pro would've been a no-brainer.
827a•14h ago
It's not entirely unique to Apple: the Ryzen AI Max platform (in, e.g., the Framework Desktop) is a unified memory platform. The PlayStation 5 also has a unified memory architecture (which, given that the chip was made by AMD, is not too surprising). (People sleep on PlayStation hardware engineering; they're far better at skating to where the puck is headed than most hardware tech companies. Remember Cell?)
thefounder•2h ago
Thank you! I was not aware of the Framework Desktop. Unfortunately it seems it's even more limited on RAM (128GB vs 512GB on the Mac Studio).
pjmlp•9h ago
It is already a walking zombie, Apple clearly no longer cares about the workstation market, regardless of how many "I still believe" t-shirts get sold to wear at WWDC.
thefounder•19h ago
Why just 768GB and not ~1TB?
halJordan•18h ago
You have to ask why Apple is going to nickel and dime you?
lbreakjai•1d ago
Not related, but why is the word "quiet" or "quietly" suddenly everywhere?
tom_•23h ago
Also mentioned here: https://news.ycombinator.com/item?id=47291513 - see the article section: "Quietly" and Other Magic Adverbs. Presumably the LLM writing style rubbing off, assuming the LLM hasn't been used to create the content in the first place.
functionmouse•23h ago
seems like people are using words like "the", "and", and the letter "e" a lot also. It must mean that everyone is a robot.
TiredOfLife•8h ago
That word has been used to turn any action into a big evil conspiracy for a long time.
brcmthrowaway•23h ago
Why can't RAM be done in-house? It's probably simpler than a CPU... right?
juahan•23h ago
This is what I’ve been wondering as well.
wmf•22h ago
Maybe etching the deep trench capacitors is really tricky.
devonkelley•3h ago
Memory is becoming the new compute bottleneck for AI workloads and Apple just accidentally confirmed it. The whole "run it locally" story for agents depends on having enough RAM to keep the model and its context window resident. Every GB of memory you don't have is a capability you can't run. This matters way more than clock speed right now.