
Is your site agent-friendly?

https://agentprobe.io/
1•kukicola•23s ago•0 comments

Combinatorial Optimization for All: Using LLMs to Aid Non-Experts

https://journal.iberamia.org/index.php/intartif/article/view/2584
1•camilochs•1m ago•0 comments

Show HN: Pooch PDF – Because Ctrl+P still prints cookie banners in 2026

https://poochpdf.com/
1•membrshiperfect•2m ago•0 comments

How to get large files to your MCP server without blowing up the context window

https://everyrow.io/blog/mcp-large-dataset-upload
1•rafaelpo•2m ago•0 comments

Patterns for Reducing Friction in AI-Assisted Development

https://martinfowler.com/articles/reduce-friction-ai/
1•zdw•3m ago•0 comments

Salt of the Earth: Underground Salt Caverns Just Might Power Our Future

https://eos.org/features/salt-of-the-earth-vast-underground-salt-caverns-are-preserving-our-histo...
1•jofer•4m ago•0 comments

Show HN: Open-sourced an email QA lib (8 checks across 12 clients in 1 audit call)

https://github.com/emailens/engine
1•tikkatenders•4m ago•0 comments

Low-Dose Lithium for Mild Cognitive Impairment: Pilot Randomized Clinical Trial

https://jamanetwork.com/journals/jamaneurology/fullarticle/2845746
1•bookofjoe•5m ago•0 comments

Show HN: AfterLive – AI digital legacy that lets loved ones hear from you

https://afterlive.ai
1•crawde•6m ago•0 comments

I Used Claude to File My Taxes for Free

https://kachess.dev/taxes/ai/personal-finance/2026/02/27/breaking-up-with-turbotax.html
1•gdudeman•6m ago•0 comments

Israel bombs council choosing Iran's next supreme leader, official says

https://www.axios.com/2026/03/03/iran-supreme-leader-council-israel-strike
1•spzx•7m ago•0 comments

Software development now costs less than the wage of a minimum wage worker

https://ghuntley.com/real/
1•herbertl•8m ago•0 comments

A [Firefox, Chromium] extension that converts Microsoft to Microslop

https://addons.mozilla.org/en-US/android/addon/microslop/
2•gaius_baltar•8m ago•0 comments

British Rail settlement plan barcode specs

https://magicalcodewit.ch/rsp-specs/
1•fanf2•9m ago•0 comments

Completing the formal proof of higher-dimensional sphere packing

https://www.math.inc/sphere-packing
1•carnevalem•9m ago•0 comments

Show HN: Verifiable Interaction Records for Agents

https://github.com/peacprotocol/peac
1•jithinraj•11m ago•0 comments

Ohio EPA weighs allowing data centers to dump wastewater into rivers

https://www.nbc4i.com/news/local-news/columbus/ohio-epa-weighs-allowing-data-centers-to-release-w...
2•randycupertino•12m ago•1 comments

What if LLM uptime was a macroeconomic indicator?

https://lab.sideband.pub/status/
1•shawnyeager•12m ago•0 comments

Watch Out: Bluetooth Analysis of the Coros Pace 3 (2025)

https://blog.syss.com/posts/bluetooth-analysis-coros-pace-3/
1•lqueenan•12m ago•0 comments

Risk, in Perspective

https://faingezicht.com/articles/2026/03/02/risk-in-perspective/
1•avyfain•13m ago•0 comments

No mentor? Learn from a 16th century French nobleman

https://www.magicreader.com/montaigne
1•mzelling•13m ago•0 comments

Show HN: I built a way to prove your software kept its promises

https://github.com/nobulexdev/nobulex
1•arian_•14m ago•0 comments

How do I market myself as a freelance Backend/Infrastructure engineer?

1•__0x01•14m ago•0 comments

The Limits of Today's AI Systems

2•Yinfan•14m ago•0 comments

Accept-Language Redirects Could Be Blocking Search Engines and AI Crawlers

https://merj.com/blog/your-accept-language-redirects-could-be-blocking-search-engines-and-ai-craw...
1•giacomoz•14m ago•0 comments

Is Unbound AI Video the most uncensored AI model in 2026?

https://unbound.video
1•gabrieln•14m ago•2 comments

Drizzle Joins PlanetScale

https://planetscale.com/blog/drizzle-joins-planetscale
4•alexblokh•15m ago•2 comments

Political market entropy in Rome. An analysis of different electoral cycles

https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2026.1744381/full
1•PaulHoule•15m ago•0 comments

Show HN: Readme badge to quickly find related open source repos

https://relatedrepos.com/badge
1•plurch•15m ago•0 comments

Apollo sued for allegedly concealing Epstein business ties from shareholders

https://www.reuters.com/sustainability/boards-policy-regulation/apollo-leon-black-sued-allegedly-...
1•petethomas•16m ago•0 comments

Apple Introduces MacBook Pro with All‑New M5 Pro and M5 Max

https://www.apple.com/newsroom/2026/03/apple-introduces-macbook-pro-with-all-new-m5-pro-and-m5-max/
237•scrlk•1h ago

Comments

miohtama•1h ago
But is it powerful enough to run Liquid glass?
MattDamonSpace•1h ago
/s I assume but it’s crazy to me that LG runs on the watch
Y-bar•1h ago
Apple TV 4K can’t run the Liquid Glass interface without stuttering, turning off transparency restores fluid (heh!) animations.
layer8•1h ago
Unlikely.
boriskourt•1h ago
Nice starting storage bump

  MacBook Pro with M5 Pro now comes standard with 1TB of storage, while MacBook Pro with M5 Max now comes standard with 2TB. And the 14-inch MacBook Pro with M5 now comes standard with 1TB of storage.
zarzavat•1h ago
It's not exactly a bump if they raise prices at the same time, though with the RAM situation I'm not mad.
SirMaster•50m ago
Well 1TB MacBook Pro used to cost $1799, now 1TB is the base model and costs $1699, so it's actually a $100 price drop for 1TB storage.
aurareturn•1h ago
Whoah, both the Pro and Max CPUs feature 18 cores. This hasn't happened since M1 Pro/Max. This is a surprise.

Also, the mix of cores has changed drastically.

- 6 "Super cores"

- 12 "Performance cores"

I'm guessing these are just renamed performance and efficiency cores from previous generations.

This is a massive change from the M4 Max:

- 12 performance cores

- 4 efficiency cores

This seems like a downgrade in core config (though maybe not in actual MT performance), assuming super = performance and performance = efficiency cores.

cced•1h ago
So they renamed performance to mean efficiency and are now using super in place of performance?
petu•48m ago
Super is old "performance" core:

> The industry-leading super core was first introduced as performance cores in M5, which also adopts the super core name for all M5-based products

But new "performance" is claimed to be new design (= not just overclocked efficiency core from M5?):

> M5 Pro and M5 Max also introduce an all-new performance core that is optimized to deliver greater power-efficient, multithreaded performance for pro workloads.

quotes from https://www.apple.com/newsroom/2026/03/apple-debuts-m5-pro-a...

netruk44•1h ago
I think super cores are a new type/tier of core, not a rename of performance.

The base M5 has super/efficiency cores.

The Pro and Max have super/performance cores.

klausa•52m ago
I don't think the "new" Performance cores are just "renamed" "E" / "Efficiency" cores; Apple has retroactively renamed the baseline M5 nomenclature to say it has "10-core CPU with 4 super cores and 6 efficiency cores"; so they're clearly keeping the "efficiency cores" nomenclature around.

I think this is a new design, with Apple having three tiers of cores now, similar to what Qualcomm has been doing for a while.

I think how it breaks down is:

- "Super" are the old "P" cores, and the top tier cores now

- "Performance" cores are a new tier and seen for the first time here, slotting between "old" P and E in performance

- "Efficiency" / "E" are still going to be around; but maybe not in desktop/Pro/Max anymore.

aurareturn•39m ago
Interesting. This is clearly a big CPU change if so. I wonder why no E cores. I’m sure E cores would be more efficient at OS tasks than the new performance cores.

For example, 6 super, 8 performance, and 4 efficiency.

NetMageSCW•25m ago
Another commenter stated the P cores can be scaled down to be E cores dynamically, so why not?
jacobp100•31m ago
I was looking into this. The M5 performance cores can be scaled down to match efficiency cores in performance and power usage.

I believe they lower the clock speed, limit how much work is done in parallel on each core, and limit how aggressive the speculative execution is so less work is wasted.

Tangokat•1h ago
"Scaling up performance from M5 and offering the same breakthrough GPU architecture with a Neural Accelerator in each core, M5 Pro and M5 Max deliver up to 4x faster LLM prompt processing than M4 Pro and M4 Max, and up to 8x AI image generation than M1 Pro and M1 Max."

Are they doubling down on local LLMs then?

I still think Apple has a huge opportunity in privacy first LLMs but so far I'm not seeing much execution. Wondering if that will change with the overhaul of Siri this spring.

jahller•1h ago
looks like this will be their angle for the whole agentic AI topic
Sharlin•1h ago
"Apple Intelligence is even more capable while protecting users’ privacy at every step."

Remains to be seen how capable it actually is. But they're certainly trying to sell the privacy aspect.

re-thc•15m ago
> Remains to be seen how capable it actually is.

It's the best. We all turned it off. 100% privacy.

whizzter•1h ago
We had a workshop 6 months ago, and while I've always been sceptical of OpenAI etc.'s silly AGI/ASI claims, the investments have shown the way to a lot of new technology and have let a genie out of the bottle that won't be put back.

Now, extrapolating from how Sun servers around the year 2000 cost a fortune and can be emulated by a $5 VPS today, Apple is seeing that they can maybe grab the local LLM workloads if they act now with their integrated chip development.

But to grab that, they need developers to rely less on CUDA via Python or have other proper hardware support for those environments, and that won't happen without the hardware being there first and the machines being able to be built with enough memory (refreshing to see Apple support 128gb even if it'll probably bleed you dry).

fny•1h ago
I feel like the push by devs towards Metal compatibility has been 10x that of AMD's. I assume that's because the majority of us run MacBooks.
davidmurdoch•1h ago
Who is "us" in this case? Majority of devs that took the stack overflow survey use Windows:

https://survey.stackoverflow.co/2025/technology/#1-computer-...

pdpi•1h ago
I think it's reasonable to say that the people responding to surveys on Stack Overflow aren't the same people who work on pushing the state of the art in local LLM deployment. (which doesn't prove that that crowd is Apple-centric, of course)
davidmurdoch•55m ago
Perhaps. Though Windows has been the majority share even when Stack Overflow was at its peak, and before.
AdamN•1h ago
That's the broad developer community. 90%+ of the engineers at Big Tech and the technorati startups are on MacOS with 5% on Linux and the other 5% on Windows.
davidmurdoch•57m ago
Source?
re-thc•13m ago
> 90%+ of the engineers at Big Tech and the technorati startups

The US ones? Is that why we have DeepSeek and then other non-US open source LLMs catching up rapidly?

World view please. The developer community is not US only.

seanmcdirmid•7m ago
You’ll see a lot of MacBooks in Beijing’s Zhongguancun, where all the tech companies are, but they also have a lot of students there as well, so who knows. You need to go out to the suburbs where Lenovo has offices to stop seeing them. I know Apple is common in Western Europe, having lived there for two years (but that was 20 years ago; I lived in China for 9 years after that).

It wouldn’t surprise me if the DeepSeek people were primarily using Macs. Maybe Alibaba might be using PCs? I’m not sure.

JCharante•53m ago
Majority of devs are in the global south I presume
whizzter•25m ago
I think that might be partly because on regular PCs you can just go and buy an NVidia card instead of futzing around with software issues, and for those on laptops they probably hope that something like Zluda will solve it via software shims or MS-backed ML APIs.

Basically, too many choices to "focus on" makes none a winner except the incumbent.

freeone3000•56m ago
Torch MPS support on my local MacBook outperforms a CUDA T4 on Colab.
aurareturn•1h ago

  Are they doubling down on local LLMs then?
Neural Accelerator was present in iPhone 17 and M5 chip already. This is not new for M5 Pro/Max.

Apple's stated AI strategy is local where it can and cloud where it needs. So "doubling down"? Probably not. But it fits in their strategy.

butILoveLife•1h ago
I think it's just marketing, and the marketing is working. Look how many people bought Minis and ended up just paying for API calls anyway. (Saw it IRL 2x, see it on reddit openclaw daily)

I don't mind it, I own Apple stock. But I'm def not buying into their rebranding of integrated GPU under the guise of Unified Memory.

Hamuko•1h ago
I've tried to use a local LLM on an M4 Pro machine and it's quite painful. Not surprised that people into LLMs would pay for tokens instead of trying to force their poor MacBooks to do it.
giancarlostoro•1h ago
What are the other specs and how does your setup look? You need a minimum of 24GB of RAM to run models of 16GB or less.
SV_BubbleTime•50m ago
This is typically true.

And while it is stupid slow, you can run models off hard drive or swap space. You wouldn’t do it normally, but it can be done to check an answer in one model versus another.

Hamuko•40m ago
48 GB MacBook Pro. All of the models I've tried have been slow and also offered terrible results.
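The RAM rules of thumb in this sub-thread can be sanity-checked with a rough estimate of weight memory. This is a sketch with approximate arithmetic only; it ignores the KV cache and runtime overhead, so real requirements are higher:

```python
# Approximate memory footprint of a model's weights alone, as a
# sanity check on "24 GB of RAM for a ~16 GB model". Figures are
# estimates, not any specific runtime's actual usage.
def model_gb(params_billion: float, bits_per_weight: int) -> float:
    """Weight memory in GB: params * bits / 8, ignoring cache/overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(7, 4), (13, 8), (70, 4)]:
    print(f"{params}B at {bits}-bit ≈ {model_gb(params, bits):.1f} GB of weights")
```

So a 70B model at 4-bit already wants ~35 GB for weights alone, which is why even a 48 GB machine feels tight once context grows.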
atwrk•1h ago
Local LLM inference is all about memory bandwidth, and an M4 Pro only has about the same as a Strix Halo or DGX Spark. That's why the older Ultras are popular with the local LLM crowd.
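The bandwidth point can be made concrete with a rough upper bound: generating one token reads (roughly) every weight from memory once, so decode speed is capped at bandwidth divided by model size. A sketch, with bandwidth figures that are approximate and the model size a hypothetical example:

```python
# Rough upper bound on decode (token generation) speed: each token
# streams ~the whole model through memory, so
#   tokens/sec <= memory bandwidth / model size.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_size = 40  # GB; e.g. a ~70B model quantized to 4-bit (illustrative)
for name, bw in [("M4 Pro (~273 GB/s)", 273),
                 ("M3 Ultra (819 GB/s)", 819),
                 ("H200 HBM (~4800 GB/s)", 4800)]:
    print(f"{name}: at most ~{max_tokens_per_sec(bw, model_size):.0f} tok/s")
```

The bound is optimistic (it ignores KV-cache reads and compute), but it shows why bandwidth, not core count, dominates this workload.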
freeone3000•57m ago
I’m super happy with it for embedding, image recog, and semantic video segmentation tasks.
jsheard•1h ago
> Look how many people bought Minis and ended up just paying for API calls anyway. (Saw it IRL 2x, see it on reddit openclaw daily)

Aren't the OpenClaw enjoyers buying Mac Minis because it's the cheapest thing which runs macOS, the only platform which can programmatically interface with iMessage and other Apple ecosystem stuff? It has nothing to do with the hardware really.

Still, buying a brand new Mac Mini for that purpose seems kind of pointless when a used M1 model would achieve the same thing.

ErneX•1h ago
It’s exactly that. They are buying the base model just for that. You are not going to do much local AI with those 16GB of ram anyway, it could be useful for small things but the main purpose of the Mini is being able to interact with the apple apps and services.
chaostheory•25m ago
No one is buying a base model Mac for local LLMs. Everyone is forgetting that PC prices have drastically increased due to RAM and SSD costs. Meanwhile, Macs had no such price change… at least for the models that didn’t just drop today. Macs are just a good deal at the moment.
briffle•5m ago
The new models cost $200 more for each 8GB of RAM you add. Ouch...
jsheard•4m ago
Yeah, because the Mac upgrade prices were already sky high. The cost of buying 32GB of DDR5-6000 for a PC rocketed from $100 to $500, while the cost of upgrading a 16GB Mac to 32GB was and still is $400.
llmslave•1h ago
yes, and it's funny that all these critical people don't know this
re-thc•54m ago
> Aren't the OpenClaw enjoyers buying Mac Minis because it's the cheapest thing which runs macOS

That's likely only part of the reason. The Mac Mini is now "cheap" because everything else exploded in price. RAM and SSD etc. have all gone up massively. Not to mention the Mac mini is an easy out-of-the-box experience.

CrazyStat•42m ago
It's not cheap, though. Two weeks ago I bought a computer with a similar form factor (GMKtec G10). Worse CPU and GPU but same 16GB memory and a larger SSD for 40% the price of a base mac mini ($239 vs $599). It came with Windows preinstalled, but I immediately wiped that to install linux. Even a used (M-series) mac mini is substantially more expensive. It will cost me about an extra penny per day in electricity costs over a mac mini, but I won't be alive long enough for the mac mini to catch up on that metric.

I considered the mac mini at the time, but the mac mini only makes sense if you need the local processing power or the apple ecosystem integration. It's certainly not cheaper if you just need a small box to make API calls and do minimal local processing.
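The electricity arithmetic in the comment above checks out; here is the back-of-envelope version, using the commenter's own figures (the one-cent-per-day power delta is their estimate, not a measured number):

```python
# Payback period: how long the Mac mini's lower power draw would take
# to recoup its higher purchase price, using the comment's figures.
mini_price = 599            # base Mac mini, USD
g10_price = 239             # GMKtec G10, USD
extra_power_per_day = 0.01  # USD/day extra electricity for the G10 (estimate)

payback_days = (mini_price - g10_price) / extra_power_per_day
print(f"Payback after {payback_days:.0f} days (~{payback_days / 365:.0f} years)")
```

A ~$360 price gap against a penny a day works out to roughly a century, hence "I won't be alive long enough."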

re-thc•33m ago
> It came with Windows preinstalled, but I immediately wiped that to install linux.

Do you really need Openclaw now? And not Claude Code + Zapier or Claude Code + cron?

That's the point. If you have a worse CPU and GPU, Windows will be sluggish (it's bloated).

nicoburns•29m ago
If you need the CPU power in the Mac Mini then it is a pretty good price-to-performance ratio.
stanmancan•17m ago
It's cheap for what you get.

If you just need "a small box to make API calls and do minimal local processing" you can also just buy an RPi for a fraction of the price of the GMKtec G10.

All 3 serve a different purpose; just because you can buy a slower machine for less doesn't mean the price:performance of the M1 Mac Mini changes.

philistine•50m ago
There are so few used Mac Minis around; they're all gone, and what's left is to buy new.
jermaustin1•7m ago
Worse than that, they hold their value, so buying a used M1 mini is still a few hundred bucks, and saving $200-300 by purchasing a 5 generation older mini seems like a bad deal in comparison.
BeetleB•45m ago
Can't they simply run MacOS on a VM on existing Mac hardware?
shuckles•36m ago
You aren’t going to run a network connected 24/7 online agent from a laptop because it’s battery powered and portable.
mleo•1h ago
My M4 MacBook Pro for work just came a few weeks ago with 128 GB of RAM. Some simple voice customization started using 90GB. The unified memory value is there.
rafram•55m ago
Why not? The integrated GPUs are quite powerful, and having access to 32+ GB of GPU memory is amazing. There's a reason people buy Macs for local LLM work. Nothing else on the market really beats it right now.
threatofrain•54m ago
The biggest problem with personal ML workflows on Mac right now is the software.
cmdrmac•10m ago
I'm curious to know what software you're referring to.
lizknope•45m ago
Jeff Geerling had a video of using 4 Mac Studios each with 512GB RAM connected by Thunderbolt. Each machine is around $10K so this isn't cheap but the performance is impressive.

https://www.youtube.com/watch?v=x4_RsUxRjKU

Greed•12m ago
If $40k is the barrier to entry for impressive, that doesn't really sell the use case of local LLMs very well.

For the same price in API calls, you could fund AI driven development across a small team for quite a long while.

Whether that remains the case once those models are no longer subsidized, TBD. But as of today the comparison isn't even close.

lynx97•1h ago
The topic is the MacBook, so my criticism is a little off. However, I really don't believe in this "local LLM" promise from Apple. My phone already gets noticeably warm if I answer 5 WhatsApp messages, and loses 5% of battery in the process. I highly doubt Apple will have a usable local LLM that doesn't drain my battery in minutes before 2030.
cosmic_cheese•1h ago
Something is not right if WhatsApp is seriously draining your phone like that. Admittedly I’m not a big WhatsApp user, but my iPhone hasn’t had any trouble like that with it.
jakeydus•1h ago
Yeah is OP using an iPhone X?
Aurornis•1h ago
The hardware capabilities that make local LLMs fast are useful for a lot of different AI workloads. Local LLMs are a hot topic right now so that’s what the marketing team is using as an example to make it relatable.
kilroy123•1h ago
I've been so disappointed in Apple's lack of execution on this. There is so much potential for fantastic local models to run and intelligently connect to cloud models.

I just don't get why they're dropping the ball so much on this.

NetMageSCW•33m ago
Because it won’t sell enough hardware to matter to them.

They aren’t dropping the ball, they are being smart and prudent.

game_the0ry•1h ago
> Are they doubling down on local LLMs then?

Honestly, I think that's the move for Apple. They do not seem to have any interest in creating a frontier lab/model -- why would they, given the capex and how far behind they are.

But open source models (Kimi, Deepseek, Qwen) are getting better and better, and Apple makes excellent hardware for local LLMs. How appealing would it be to have your own LLM that knows all your secrets and doesn't serve you ads/slop, versus OpenAI and SCam Altman having all your secrets? I would seriously consider it even if the performance was not quite there. And no need for a subscription + CLI tool.

I think apple is in the best position to have native AI, versus the competition which end up being edge nodes for the big 4 frontier labs.

andy_ppp•1h ago
It is simply marketing nonsense - what they really mean (I think) is they support matrix multiplication (matmul) at the hardware level which given AI is mostly matrix multiplications you'll get much faster inference (and some increase in training too) on this new hardware. I'm looking forward to seeing how fast a local 96gb+ LLM is on the M5 Max with 128gb of RAM.
ivankra•1h ago
But memory bandwidth (the bottleneck for LLM inference) is only marginally improved: 614 GB/s for the M5 Max vs 546 GB/s for the M4 Max. Where is this 4x improvement coming from?

I think I'll pass on upgrading.

singhrac•1h ago
It’s prompt processing, i.e. prefill. That’s compute-bound, not memory-bound.
general_reveal•1h ago
It’s not necessarily doubling down on local. The reality is your LLM should be inferencing every tick … the same way your brain thinks every. Fucking. Nano. Second.

So yes, the LLM should be inferencing on your prompt, but it should also be inferencing on 25,000 other things … in parallel.

Those are the compute needs.

We just need compute everywhere as fast as possible.

Lalabadie•1h ago
There already are a bunch of task-specific models running on their devices, it makes sense to maintain and build capacity in that area.

I assume they have a moderate bet on on-device SLMs in addition to other ML models, but not much planned for LLMs, which at that scale might be good as generalists but very poor at guaranteeing success for each specific minute task you want done.

In short: 8gb to store tens of very small and fast purpose-specific models is much better than a single 8gb LLM trying to do everything.

Munachi1869•54m ago
Probably possible for pure coding models. I see on-device models becoming viable and usable in like 2-3 years.
Someone1234•1h ago
Apple's AI strategy really kind of threads the needle cleverly.

"AI" (LLMs) may or may not have a bubble-pop moment, but until it does Apple get to ride it on these press releases and claims. But if the big-pop occurs, then Apple winds up with really fantastic hardware that just happens to be good at AI workloads (as well as general computing).

For example, image classification (e.g. face recognition/photo tagging), ASR+vocoders, image enhancement, OCR, et al, were popular before the current boom, and will likely remain popular after. Even if LLM usage dries up/falls out of vogue, this hardware still offers a significant user benefit.

ChrisGreenHeur•1h ago
those things could likely just run fine on the gpu though
Someone1234•1h ago
They could run fine on the CPU too. But these are mobile devices, therefore battery usage is another significant metric. Dedicated hardware is more energy efficient than general hardware, and GPU in particular is a power-hog.
vel0city•35m ago
Exactly. It's the same thing as video or audio encoding and decoding. Sure the CPU could do it, potentially use the GPU, but having actual hardware encoders and decoders for the most common codecs will save a lot of energy.
jmyeet•1h ago
Apple absolutely has a massive opportunity here because they used a shared memory architecture.

So as most people in or adjacent to the AI space know, NVidia gatekeeps their best GPUs with the most memory by making them eye-wateringly expensive. It's a form of market segmentation. So consumer GPUs top out at 16GB (5090 currently) while the best AI GPUs (H200?) have 141GB (I just had to search). I think the previous gen was 80GB.

But these GPUs are north of $30k.

Now the Mac Studio tops out currently at 512GB of SHARED memory. That means you can potentially run a much larger model locally without distributing it across machines. Currently that retails at $9500, but that's relatively cheap in comparison.

But, as it stands now, the best Apple chips have significantly lower memory bandwidth than NVidia GPUs and that really impacts tokens/second.

So I've been waiting to see if Apple will realize this and address it in the next generation of Mac Studios (and, to a lesser extent, MacBook Pros). The H200 seems to be 4.8TB/s. IIRC the 5090 is ~1.8TB/s. The best Apple is (IIRC) 819GB/s on the M3 Ultra.

Apple could really make a dent in NVidia's monopoly here if they address some of these technical limitations.

So I just checked the memory bandwidth of these new chips and it seems like the M5 is 153GB/s, M5 Pro is ~300 and M5 Max is ~600. This isn't a big jump from the M4 generation; I had been hoping for higher. I suspect the new Studios will probably barely break 1TB/s.

SirMaster•57m ago
>So consumer GPUs top out at 16GB (5090 currently)

5090 has 32GB, and the 4090 and 3090 both have 24GB.

ericd•46m ago
Hard to get 6000+ bit memory bus HBM bandwidth out of a 512 or 1024 bit memory bus tied to DDR... I think it's also just tough to physically tie in 512 gigs close enough to the GPU to run at those speeds. But yeah, I wish there was a very competitive local option, too, short of spending $50k+.
meisel•1h ago
What % of users actually care that much about local LLMs? It appears to still be an inferior (though maybe decent) service compared to ChatGPT etc., and requires very top-end hardware. Is privacy _that_ important to people when their Google search history has been a gateway to the soul for years? I wonder if these machines would cost significantly less (or put the cost to other things, e.g. more CPU cores) without this emphasis on LLMs.
barrell•59m ago
Privacy is definitely not a concern for the layman, but it is for lots of people, especially pro users. I also haven’t made a Google search in years.

I also haven’t seen any improvements in the frontier models in years, and I’m anxiously awaiting local models to catch up.

m3kw9•57m ago
A useful LLM that needs 64GB of RAM and mid-double-digit core counts is not useful for 99% of their customers. The LLMs they have on the iPhone 17 certainly cannot do anything useful other than summarization and the like. It's a hardware constraint that they have.
tiffanyh•52m ago
> Are they doubling down on local LLMs then?

Apple is in the hardware business.

They want you to buy their hardware.

People using Cloud for compute is essentially competitive to their core business.

neya•45m ago
> I still think Apple has a huge opportunity in privacy first LLMs

This correlation of Apple and privacy needs to rest. They have consistently proven to be otherwise - despite heavily marketing themselves as "privacy-first"

https://www.theguardian.com/technology/2019/jul/26/apple-con...

chaostheory•18m ago
Not for everything. Apple has initially focused on edge AI that runs locally per device. It didn’t work out well the first try, but I would still bet on them trying again once compute catches up. Besides, they still have a better track record than the other tech giants.
4fterd4rk•6m ago
I think it's a little telling that the best you can do is a seven year old article.
icar•43m ago
Didn't they announce a partnership with Google Gemini?
ignoramous•35m ago
> doubling down on local LLMs

I do think it'll be common to see pros purchasing expensive PCs approaching £25k or more if they could run SotA multi-modal LLMs faster and locally.

woadwarrior01•17m ago
> Are they doubling down on local LLMs then?

Neural Accelerators (aka NAX) accelerate matmuls with tile sizes >= 32. From a very high-level perspective, LLM inference has two phases: (chunked) prefill and decode. The former is matmuls (GEMM) and the latter is matrix-vector mults (GEMV). Neural Accelerators make the former (prefill) faster and have no impact on the latter.
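The prefill/decode split described above can be sketched with NumPy; shapes are illustrative, not any real model's:

```python
# Prefill vs decode: prefill pushes all prompt tokens through the
# weights at once (matrix-matrix product, GEMM), while decode handles
# one token per step (matrix-vector product, GEMV).
import numpy as np

d_model, prompt_len = 1024, 512
W = np.random.randn(d_model, d_model).astype(np.float32)  # one weight matrix

# Prefill: a large GEMM, compute-bound -- exactly the shape that
# tiled matmul hardware accelerates.
prompt = np.random.randn(prompt_len, d_model).astype(np.float32)
prefill_out = prompt @ W        # (512, 1024) @ (1024, 1024) -> (512, 1024)

# Decode: a GEMV. All of W must still be read from memory for a
# single token, so this phase is bandwidth-bound instead.
token = np.random.randn(d_model).astype(np.float32)
decode_out = token @ W          # (1024,) @ (1024, 1024) -> (1024,)

print(prefill_out.shape, decode_out.shape)
```

This is why a matmul accelerator speeds up prompt processing without moving the needle on tokens-per-second during generation.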

whizzter•1h ago
128gb of memory, it's a nice change for Apple not to lag in that department for once, wonder what such a machine will cost though.
Sharlin•1h ago
At today's prices, the memory will probably cost more than the rest of the hardware combined :P
Detrytus•1h ago
128GB has been there for a while. I am kind of disappointed they do not have a 256GB option.
ajdude•1h ago
I was really hoping to see 512gb but I guess they don't want it to cut into the sales of the Studio.
varispeed•1h ago
Same here. If they had a 256GB option I'd pull the trigger. Now I might be looking for alternatives.
vardump•44m ago
No 256 GB model, so no purchase. What a shame.
jeroenhd•1h ago
Checking Apple's store, I can't find a cheaper configuration than $5100 for the M5 + 128GiB version.

Here in Europe, including 21% VAT, that's €6.124,00 ($7.094,35 equivalent).

Because of pricing strategies and such, the 128GiB version comes with a 2TiB SSD at minimum, and also requires the M5 Max (not Pro) at its highest configuration.

Not sure if this is new, but it should be noted that these laptops don't come with a charger any more.

alwillis•46m ago
In the US, power adapters are included:

    70W USB-C Power Adapter (included with M5 Pro with 16-core GPU)

    96W USB-C Power Adapter (included with M5 Pro with 20-core GPU, configurable with M5 Pro with 16-core GPU)

    USB-C to MagSafe 3 Cable (2 m)
snowchaser•1h ago
In US, going to 128 GB from 32 is $1500 extra. However 32 GB is only offered with the 32 core version and 128 only with the 40 core version.
nsbk•1h ago
The hardware looks amazing! Too bad they will ship with Tahoe installed. I’m not upgrading until I see in which direction the next Mac OS release goes
carlmr•1h ago
I've upgraded to Tahoe at 26.2, zero complaints from my side. Haven't had any runaway memory leaks or similar that were reported.
arianvanp•1h ago
Closing tabs in Safari still takes more than a second though. And if you hold Cmd-W to close all of them it just completely locks up and crashes. Still not fixed since the release of Safari 26.

Literally unusable

alwillis•1h ago
I’ve been running the macOS 26.4 beta and have none of these issues.
nhubbard•1h ago
I will say that 26.4 beta 2 was the first time I've regretted using betas since Sonoma beta 2. The Sonoma beta ruined the firmware on my machine and Apple had to replace the logic board; the latest Tahoe beta broke all networking on my machine and I had to erase the installation to fix everything. I've since dropped off the beta train for the time being.

I already left the beta train on my iPhone because I had too many issues getting my grocery apps to allow me to place orders without going to my laptop and doing it in a web browser.

nozzlegear•1h ago
Never had this problem, been on Tahoe since it released. My safari tabs are buttery, silken smooth.
Analemma_•53m ago
I'm on an M4 Pro MacBook-- basically the fastest computer you could buy from Apple before today-- and opening/closing the tab sidebar in Safari on Tahoe takes multiple seconds, even if I have only 4-6 tabs open, and seems to drop to 5 FPS. It's comically bad.

It's so bad I switched back to Chrome. I had thought Chrome had a major battery life penalty compared to Safari on Macs, but I checked more up-to-date info and apparently that's outdated.

AdamN•51m ago
Works fine for me. I wonder if you have some extension or script on one of the sites you use slowing down the tab closure.
jillesvangurp•45m ago
Same here. I know some people are unhappy with some of the UX tweaks but honestly I don't notice much of it. The whole liquid glass thing is a bit gimmicky. Other than that, I don't see much difference. The rounded corners on windows are a bit silly. But I don't spend a lot of time fiddling with windows. Most of my windows are maximized (not full screen). I'm sure there are other issues people dislike that I just haven't noticed.

I use my laptop for development. I don't actually use most of the built in applications. My browser is Firefox, I use codex, vs code, intellij, iterm2, etc. Most of that works just fine just as it did on previous versions of the OS. I actually on purpose keep my tool chains portable as I like to have the option to switch back to Linux when I want to. I've done that a few times. I come back for the hardware, not the OS.

In my experience, if you don't like Apple's OS changes that is unfortunate but they don't seem to generally respond to a lot of the criticism. Your choices are to get further and further out of date, switch to something else, or just swallow your pride. Been there done that. Windows is a "Hell No" for me at this point. I'll take the UX, with all the pastel colors that came and went and all the other crap that got unleashed on macs over the last ten years. Definitely a case of the grass not being greener on Windows. Even with the tele tubby default desktop in XP back in the day.

I can deal with Linux (and use that on and off on one of my laptops). However, that just doesn't run that well on mac hardware. And any other hardware seems like a big downgrade to me. Both Windows and Linux are arguably a lot worse in terms of UX (or lack thereof). Linux you can tweak. And you kind of have to. But it just never adds up to consistent and delightful. Windows, well, at this point liking that is probably a form of Stockholm Syndrome. If that doesn't bother you, good for you.

So, Mac OS it is for me as everything else is worse. I've in the past deferred updates to new versions of Mac OS as well. Generally you can do that for a while but eventually it becomes annoying when things like homebrew and other development toys start assuming you run something more recent. And of course for security reasons you might just not drag your feet too long. Just my personal, pragmatic take.

hu3•1h ago
Just yesterday, my colleague's mac Time Machine couldn't recover backup and they had to reinstall everything.

But I think this predates Tahoe.

zarzavat•49m ago
Silent corruption has been a feature of Time Machine for the last 19 years. But haven't you seen the new glass effects, isn't it cool?
satoqz•1h ago
This. I have been a big (and loud) fan of M-series hardware from the beginning, but if Apple is going to keep making their software worse, I will find myself lingering on older generations that run Asahi Linux or going back to a traditional x86_64 laptop instead of buying into new generations.
pier25•1h ago
Yeah this is a real issue with these new Macs. I would wait until macOS 27 to see the direction Apple takes.
satvikpendem•59m ago
The next macOS will be touch screen centric with elements getting bigger when you're close to touching them, rumors say. That being said, I run Tahoe and it works perfectly fine to me, I am not sure what issues people have with it. Sure, some corner radii aren't exactly the same but I honestly couldn't give less of a shit as long as it runs the programs I need.
nsbk•46m ago
Safari routinely using 20+ GB of memory with a handful of tabs open. Safari tabs refusing to close. Unresponsive System Settings window. Random application freezes and crashes, Apple Music not playing music. This is on a 32GB M1 Max. My M1 Air on Sequoia doesn't experience any of these issues, even though it has half the unified memory.
satvikpendem•41m ago
I never had any of those issues, but then again I don't use Safari or other Apple apps like music.
egwor•1h ago
I thought that new models were typically released in October. Have I misremembered or is this an unusual timing vs previous years? If so, I wonder why the earlier release?
afavour•1h ago
Increasing component prices perhaps? Get some sales in before you have to jack up the sale price.
alwillis•58m ago
Prices aren’t likely to change. Even when the tariffs were on, Apple’s prices didn’t change; they gave up some margin.

They also probably had RAM contracts in place far enough in advance to avoid the worst of the price spikes.

ErneX•1h ago
You remember well, they didn’t update these last fall.

And another rumor said these are going to be updated again this fall but I’m not sure about that. With OLED screens and M6 (supposedly).

cheschire•1h ago
Maybe they want people to have more money available for the new phones later this year, since that market is in decline.
chippiewill•1h ago
They didn't update them last October is why.

I think at this point Apple will just release new versions of laptops whenever new CPU revisions and yields allow. M5 Pro wasn't ready for October so delayed until now.

layer8•1h ago
M6 is rumored to be released in Q4.
wincy•1h ago
I typed “RAM” to search for it and boy they hammer home how lucky I am to be getting 1TB SSD standard, but no mention of RAM anywhere on this page. Anyway, the MacBook Pro starts with 16GB of RAM. It’s $400 to go from 16GB to 32GB.

Interestingly, 36-128GB models are showing as “currently unavailable” on the store page, and you can’t even place an order for them right now? But for anyone curious, it’s quoting $5099 for the 128GB RAM 14” MacBook Pro model.

tonyedgecombe•1h ago
>Anyway, it starts with 16GB of RAM. $400 to go from 16GB to 32GB

Interesting that this hasn't budged since the memory shortages appeared.

WarmWash•1h ago
Fair chance that Apple has price/purchase agreements already in place. Consumers are left to fight over the excess capacity after megabuyers get their orders filled.
mschuster91•1h ago
> Interesting that this hasn't budged since the memory shortages appeared.

Apple has a deep enough war chest to buy the entirety of TSMC's new capacity years in advance, as they have done in the past.

If I were to guess, Apple locked in their entire BOM and production capacity two years ago. That's something even the large players cannot replicate because they run cash-lean and have too many different SKUs, and the small players (Framework, System76, even Steam) are entirely left to the forces of the markets.

lm28469•1h ago
They sell you 1GB of LPDDR5X for $25 while buying it at $5, don't worry about their margins...
jeroenhd•1h ago
I know RAM is scarce and everything, but doubling down on LLM local acceleration with all of that dedicated silicon while at the same time sticking with Apple's traditional lack of RAM availability makes for a very weird product proposition to me.
aurareturn•1h ago
It starts at 16GB for the base M5 and 24GB for the Pro/Max. It's been like this.
edvinasbartkus•1h ago
On Apple Silicon Macs it's never called RAM, it's "unified memory"
lxgr•1h ago
I'm honestly just glad they don't brand this as "1016 MB of unified memory". Swap and ramdisks are a thing, after all...
TheCapeGreek•1h ago
Apple's RAM price bumps were already insane, now they'll get worse.
ezfe•1h ago
They’re literally not changing
hu3•1h ago
It did change. They bumped $200 on the entire line. So even the 16GB version is more expensive.

I'd love to have customers like Apple. Bumps $200: "it didn't change!!!"

And no power adapter included.

mschuster91•1h ago
> And no power adapter included.

To be fair, ever since the advent of high-power USB-C PD that really, really is not needed any more; way too many power bricks are effectively e-waste.

People already have USB-C power bricks and docks everywhere and unlike pre-USB-C generations, you can use them not just across different generations of hardware, but across vendors as well.

NetMageSCW•30m ago
I doubt if that many have USB-C high power bricks unless they are upgrading from another USB-C laptop.
vile_wretch•56m ago
The EU forbids them from including power adapters. They're still included everywhere else.
SirMaster•53m ago
You mean bumped $100. M4 MacBook Pro and M5 MacBook Pro started at $1599 with 512GB SSD.

Now it starts at $1699, a $100 bump, but comes with a 1TB SSD. Previously it would have cost $1799 for the 1TB SSD, so it's a $100 bump on the base price, but you're also getting the 1TB SSD for $100 less than before.

re-thc•53m ago
> It did change. They bumped $200 on the entire line.

I wonder if that would happen regardless of RAM, e.g. for tariffs etc.

jsheard•1h ago
> It’s $400 to go from 16GB to 32GB.

No change from the previous models then, 16GB->32GB was already $400. They're cutting into their previously enormous margins to keep the prices stable, rather than hiking the prices to maintain their margins.

philistine•46m ago
They bought the fab time for that RAM 2-3 years ago. Apple is renowned for their foresight and preparation. We'll eventually see price increases from Apple's RAM upgrade, but we're not there yet.
daveidol•45m ago
Their margins may not have changed actually. https://youtu.be/IGCzo6s768o
niwtsol•37m ago
This is not exactly correct. If you have an M5 Pro chip instead of the base M5 chip (I just built a 16-inch with the M5 Pro), it's $400 to go from 24GB to 48GB, and an additional $200 ($600 over base) to go to 64GB. So the memory upgrade prices change based on the chip. The M5 Max starts with 48GB of memory.
raincole•1h ago
> M5 Pro supports up to 64GB of unified memory with up to 307GB/s of memory bandwidth, while M5 Max supports up to 128GB of unified memory with up to 614GB/s of memory bandwidth

Isn't this it?

wincy•35m ago
Ah yeah you’re right, thanks. I tried to at least make my post useful and pull up prices for the different tiers. Overall, those prices are surprisingly competitive now compared to the rest of the laptop market!
stetrain•1h ago
On the M5 Pro tier (not the base M5 tier that was released last November), the base memory is 24GB.

My M3 Pro from a few years ago for the same price had 18GB.

2OEH8eoCRo0•1h ago
Insane for the "Pro" to have only 16GB of memory. My 11 year old Intel i3 laptop has 16GB of memory.
detritus•36m ago
Don't these integrated ARM-based SoCs make much better use of RAM as opposed to old Intel-based boards? That's my understanding, anyway.
2OEH8eoCRo0•33m ago
The benefits are in speed not capacity.
wincy•32m ago
My wife’s 8GB MacBook Air crashed yesterday with Firefox and Find My open and nothing else because of running out of RAM, so, sort of, but they’re not magic. (Find My was using 3GB of memory!)
dawnerd•23m ago
More to do with the faster storage allowing you to swap without noticing it as much. There was this whole trend when the M1 first came out of people saying it didn't matter if you got the lowest spec because the SSD was so fast it made up for the lack of RAM... totally ignoring that swapping like that was destroying their drives really fast.
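The drive-wear concern here is just endurance arithmetic. A quick sketch with made-up numbers (the TBW rating and daily swap volumes below are illustrative assumptions, not measured figures for any Apple SSD):

```python
# SSDs are rated in terabytes written (TBW); heavy swapping eats that budget.
# All numbers here are hypothetical, for scale only.

def years_to_endurance(tbw_rating_tb: float, writes_gb_per_day: float) -> float:
    """Years until the rated write endurance is exhausted at a constant write rate."""
    return tbw_rating_tb * 1000 / writes_gb_per_day / 365

# A hypothetical 600 TBW drive:
light = years_to_endurance(600, 20)    # ~82 years at a light 20 GB/day
heavy = years_to_endurance(600, 500)   # ~3.3 years at 500 GB/day of swap thrash
print(round(light), round(heavy, 1))
```

So light swap use is a non-issue, but a low-RAM machine churning hundreds of GB per day through swap can plausibly burn through a soldered drive's rated endurance within its useful life.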
kylec•35m ago
Apple doesn't tend to use "RAM" in their marketing materials, they usually use "memory", which appears 9 times in the press release.
armsaw•14m ago
Preorders open tomorrow according to the store page. You can’t order the base RAM model today, either.
jansan•1h ago
The performance numbers are impressive, but I do not get the on-board AI spin. What is it used for?
boringg•1h ago
marketing.
layer8•1h ago
Image Playground
satvikpendem•48m ago
Local LLMs. Lots of people buy Macs due to their unified memory which obviates the need to buy a much more expensive GPU to get the same amount of VRAM.
alwillis•30m ago
If you’re working on something sensitive, you may not want to share it with OpenAI or Anthropic.

You can run open source models like Kimi K or Qwen locally. Apple recently updated Xcode 26.3 to support local models.

testfrequency•1h ago
I have a fairly maxed out M2 Ultra (24 cores, 192GB RAM), and still cannot get this machine to choke on anything.

I have not once felt the need to upgrade in years, and that’s with doing pretty demanding 3D and LLM work.

mikert89•1h ago
Yeah I have an M1 Max, and I really want to upgrade, but there’s no reason to.
Sharlin•1h ago
AI video generation can fairly easily choke anything that's not NVIDIA's flagship model. Even the latest local image gen models are so large that they can be frustratingly slow with non-optimal hardware even if they fit in the VRAM. IIRC when I had an M2, it was about 4x slower at running the venerable Stable Diffusion (and SDXL) than my meager RTX 3060.
testfrequency•1h ago
I do not do anything with AI Video, but I imagine running this locally would be a hog on a Mac - especially if not optimised for Metal.
carlosjobim•1h ago
You might have confused Hacker News with your e-mail inbox again. This is an Apple press release, directed to everybody in the world who might be interested in a new computer or their first computer.
testfrequency•1h ago
What’s with the attitude? My machine is aging like a fine wine, I’m acknowledging how resilient their custom silicon is despite the world demanding more and more compute.
carlosjobim•1h ago
It was a joke, should have put a smiley face. But every thread on a new Apple product here on HN have the same "why should I upgrade" comment, forgetting that there are people who might have very old devices they want to upgrade, or they might want to switch from Windows/Android to Apple.

Even if a new device is a small upgrade from last year's model, it can be a giant upgrade for other people.

testfrequency•1h ago
Got it. I guess it feels unfair to gaslight people who are celebrating not needing upgrades, anecdotally sharing their experiences - because some people just need a new computer for xyz reason in time.
_jab•1h ago
I've found current-generation Macs so capable that I've switched to using a Macbook Air. Would strongly recommend - it's still a powerful machine and it's significantly lighter and cheaper.
aurareturn•1h ago

   and that’s with doing pretty demanding 3D and LLM work.
It definitely chokes with larger models that can fit the 192GB of RAM. Prompt processing is a big bottleneck before M5.
magicalist•1h ago
> It definitely chokes with larger models that can fit the 192GB of RAM

M5 Max maxes out at 128GB, so that will have to wait for the eventual M5 Ultra anyways.

prodigycorp•1h ago
If there’s anything this past three years has taught me, it’s that modern cpus can performantly do every task except for streaming text over the internet.
Aurornis•1h ago
I have a powerful older Mac that doesn’t really “choke” on anything, but I could always use more speed.

The high memory Macs have been great for being able to run LLMs, but the prompt processing has always been on the slow side. The new AI acceleration in these should help with that.

There are also workloads like compiling code where I’ll take all the extra speed I can get. Every little bit of reduced cycle time helps me finish earlier in the day.

And then there’s gaming. I don’t game much, but the M1 and M2 era Apple Silicon feels sluggish relative to what I have on the nVidia side.

replwoacause•1h ago
Sounds pretty beefy. What kind of local LLM is that thing capable of running? Does it open up real alternatives to cloud providers like OpenAI and Claude, or are the local models this hardware is capable of running still pretty far behind?
butILoveLife•1h ago
>LLM work.

Doubt

I imagine you basically use online models exclusively, and occasionally try out local stuff.

Source: My fortune 20 company tried with M whatever, and the local llms were unusable.

satvikpendem•54m ago
Just because you don't usually use local models doesn't mean others don't, especially with their 192 GB of RAM.
pixelesque•1h ago
Interesting that they're showing VFX/CG software (Autodesk MAYA and Foundry Nuke) so prominently - obviously people using "Pro" machines are the target audience for this, but both of those apps (any many others in the industry) use Qt for the interface, rather than being totally platform-native.
klabb3•59m ago
Contrary to HN popular belief, there are neither incentives nor benefits to building native UI apps, whether for consumer or professional apps. The exception is apps that only make sense on a single platform, such as window management and other deep integration. On iOS/macOS you have a segment of indie/smaller apps that capture a niche market of power users for things like productivity apps. But the point is it makes no sense for anything from Slack, VSCode, Maya, DaVinci Resolve, and so on, to build native UIs. Even if they wanted to build and maintain 3 versions, advanced features aren't always available in these frameworks. In the case of Windows, even MS has given up on their own tech and has opted to launch webview-based apps. Apple is slightly more principled.
trymas•36m ago
I am not an apple framework expert, but some things in apple ecosystem are nice.

CoreImage - GPU accelerated image processing out of the box;

ML/GPU frameworks - you can get built-in, on device's GPU running ML algorithms or do computations on GPU;

Accelerate - CPU vector computations;

Doing such things probably will force you to have platform specific implementations anyway. Though as you said - makes sense only in some niches.

NetMageSCW•13m ago
Strong disagree. I think Microsoft's decision to wrap web apps for the desktop is one of the stupidest they have ever made. It provides a poor user experience, uses more battery power, needs more memory and CPU to be performant, and creates inconsistencies and weird errors compared to native apps.
trymas•45m ago
Similar thoughts with first image of Capture One, when apple bought Pixelmator/Photomator a year ago.

I think I read somewhere long time ago that Capture One is also using Qt for GUI, though cannot find this anymore, so probably not true.

sarmike31•1h ago
An ”unrivaled experience” with MacOS Tahoe…
dirk94018•1h ago
On M4 Max 128GB we're seeing ~100 tok/s generation on a 30B parameter model in our from scratch inference engine. Very curious what the "4x faster LLM prompt processing" translates to in practice. Smallish, local 30B-70B inference is genuinely usable territory for real dev workflows, not just demos. Will require staying plugged in though.
butILoveLife•1h ago
>On M4 Max 128GB we're seeing ~100 tok/s generation

You mean on your first token. Whats the performance after 500 and 3000 tokens?

I genuinely don't understand why people post stuff like this. People are not informed enough to know you mean first tokens. They are going to make a mistake and buy one thinking they will get 100tk/s.

Are you working for Apple marketing? Do you have post purchase regret? I cannot imagine deliberately misleading people. Maybe you are hoping more buyers build up your ecosystem?

kamranjon•1h ago
I'm not sure if you're just unaware or purposefully dense. It's absolutely possible to get those numbers for certain models on an M4 Max, and it's averaged over many tokens; I was just getting 127 tok/s for a 700-token response on a 24B MoE model yesterday. I tend to use Qwen 3 Coder Next the most, which is closer to 65 or 70 tok/s, but absolutely usable for dev work.

I think the truth is somewhere in the middle, many people don't realize just how performant (especially with MLX) some of these models have become on Mac hardware, and just how powerful the shared memory architecture they've built is, but also there is a lot of hype and misinformation on performance when compared to dedicated GPU's. It's a tradeoff between available memory and performance, but often it makes sense.

fooblaster•43m ago
What inference runtime are you using? You mentioned MLX but I didn't think anyone was using that for local LLMs
dirk94018•1h ago
For chat type interactions prefill is cached, prompt is processed at 400tk/s and generation is 100-107tk/s, it's quite snappy. Sure, for 130,000 tokens, processing documents it drops to, I think 60tk/s, but don't quote me on that. The larger point is that local LLMs are becoming useful, and they are getting smarter too.
macintux•56m ago
Please read the guidelines and consider moderating your tone. Hostility towards other commenters is strongly discouraged.
eknkc•1h ago
I find time to first token more important than tok/s generally, as these models wait an ungodly amount of time before streaming results. It looks like the claims are true based on M5: https://www.macstories.net/stories/ipad-pro-m5-neural-benchm... so this might work great.
hu3•1h ago
What about real workloads? Because as context gets larger, these local LLMs approach the useless end of the spectrum with regard to t/s.
Someone1234•57m ago
I strongly agree. People see local "GPT-4 level" responses, and get excited, which I totally get. But how quickly is the fall-off as the context size grows? Because if it cannot hold and reference a single source-code file in its context, the efficiency will absolutely crater.

That's actually the biggest growth area in LLMs: it is no longer about smart, it is about context windows (usable ones, not spec-sheet hypotheticals). Smart enough is mostly solved; handling larger problems is slowly improving with every major release (but there is no ceiling).

satvikpendem•56m ago
That should be covered by the harness rather than the LLM itself, no? Compaction and summarization should be able to allow the LLM to still run smoothly even on large contexts.
storus•1h ago
4x faster is about token prefill, i.e. the time to first token. It should be on par with DGX Spark there while being slightly faster than M4 for token generation. I.e. when you have long context, you don't need to wait 15 minutes, only 4 minutes.
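The 15-minutes-to-4-minutes claim is just the prefill division. A quick sketch (the prefill rates here are illustrative assumptions, not Apple's published figures):

```python
# Time to first token is dominated by prefill: TTFT ≈ prompt_tokens / prefill_rate.
# Rates are illustrative guesses, not measured M4/M5 numbers.

def time_to_first_token(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Seconds spent processing the prompt before the first output token."""
    return prompt_tokens / prefill_tok_per_s

# A hypothetical 100k-token context at ~110 tok/s prefill takes ~15 minutes;
# a 4x faster prefill rate brings the same prompt down to ~4 minutes.
slow = time_to_first_token(100_000, 110)      # ~909 s, ~15 min
fast = time_to_first_token(100_000, 4 * 110)  # ~227 s, ~4 min
print(f"{slow / 60:.1f} min -> {fast / 60:.1f} min")
```

Note this only shrinks the wait before streaming starts; the tok/s you see during generation is governed by memory bandwidth, not the prefill speedup.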
fotcorn•51m ago
The memory bandwith on M4 Max is 546 GB/s, M5 Max is 614GB/s, so not a huge jump.

The new tensor cores, sorry, "Neural Accelerator" only really help with prompt preprocessing aka prefill, and not with token generation. Token generation is memory bound.

Hopefully the Ultra version (if it exists) has a bigger jump in memory bandwidth and maximum RAM.
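The memory-bound point lends itself to a back-of-envelope ceiling: each generated token streams the model's active weights from memory once, so tokens/sec can't exceed bandwidth divided by weight bytes. A sketch using the bandwidth figures above (the 30B 4-bit dense model is an illustrative assumption):

```python
# Upper bound on generation speed for a dense model:
# tok/s ≈ memory bandwidth / bytes of weights read per token.
# Real-world throughput is lower; MoE models read fewer active weights, so faster.

def gen_ceiling_tok_per_s(bandwidth_gb_s: float, params_billions: float,
                          bytes_per_param: float) -> float:
    """Bandwidth-bound ceiling on tokens/sec (weights only, ignores KV cache reads)."""
    weight_gb = params_billions * bytes_per_param  # billions of params -> GB
    return bandwidth_gb_s / weight_gb

# A 30B dense model at 4-bit (~0.5 bytes/param) on the quoted bandwidths:
m4_max = gen_ceiling_tok_per_s(546, 30, 0.5)  # ~36 tok/s ceiling
m5_max = gen_ceiling_tok_per_s(614, 30, 0.5)  # ~41 tok/s ceiling
print(round(m4_max), round(m5_max))
```

This is why the 546 to 614 GB/s jump only buys about 12% more generation speed, while the new accelerators target the compute-bound prefill phase instead.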

anentropic•8m ago
Do any frameworks manage to use the neural engine cores for that?

Most stuff ends up running Metal -> GPU I thought

barumrho•27m ago
100 tok/s sounds pretty good. What do you get with 70B? With 128GB, you need quantization to fit 70B model, right?

Wondering if local LLM (for coding) is a realistic option, otherwise I wouldn't have to max out the RAM.
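On the fits-in-128GB question, the weight math is straightforward (illustrative only; real footprints add KV cache and runtime overhead on top):

```python
# Weight memory for a dense model: params * bits / 8.
# Ignores KV cache and runtime overhead, which add several more GB.

def weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight footprint in GB (params in billions)."""
    return params_billions * bits_per_param / 8

for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: {weights_gb(70, bits):.0f} GB")
# 16-bit weights alone (140 GB) won't fit in 128 GB of unified memory,
# so a 70B model needs 8-bit (70 GB) or 4-bit (35 GB) quantization.
```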

brtkwr•1h ago
Why doesn't this excite me anymore?
righthand•1h ago
Because it was always a vapid distraction from life.
neom•1h ago
For me going way back, it was exciting when I had to save a bit (but not too much!) for a new 512MB DIMM, and when I opened the box and smelled the chip smell, put it in always worried I was going to fuck it up, and then the computer literally felt faster that next boot... that was pretty fun!! Now it's like oh great, $5k for a slab of stone that can do pretty much anything, neat. I still think computers are cool, just not particularly exciting.
replwoacause•1h ago
Me either. I guess it's just fatigue, at least for me. I also don't really get that excited by new LLM releases either. Not to say the tech isn't impressive, but I guess all the hype has me inured.
lm28469•54m ago
Because it's the same shit every year for the past 5 years with the M line. 2010 to 2015 was a major improvement, 2015 to 2020 was a major improvement, now they pretty much solved the computer/laptop problem for 99% of people. I'm on a 16gb m1 air, I see absolutely no reason to update.
satvikpendem•51m ago
Because the M1 was too good, a qualitative leap over previous Macs and really every other laptop and even some desktops back in 2020. Now, Apple Silicon is just iterative.
FBISurveillance•1h ago
Note: no power adapter included.
NetMageSCW•10m ago
Not true everywhere. Only where required by law, so complain to your government.
varispeed•1h ago
Only 128GB. I was hoping they'd do 256GB version. Disappointing.
kwanbix•1h ago
I wonder if it is good to just get one and run Linux on a VM. Would that work better than an x64? Anybody knows?
pbmonster•55m ago
Why would you want to do that? Do you like the hardware that much, and also that much more than just an M2 (soon M3) running Asahi?

Linux in a VM would work with the usual caveats. Peripherals like the built-in webcam most likely won't work. Getting codecs and DRM to run will be a pain and you'll be back to using macOS for that quickly (but that's just the standard pain of ARM Linux).

pcurve•1h ago
$200 price bump across the board. The cheapest 16" is now $2699 and the 14" Pro $2199. I think it's a fair price considering the M2 Pro 14" was $1999 (though it was discounted) and only had a 512GB SSD and 16GB of RAM.
SirMaster•52m ago
It's not $200 across the board. M4 MacBook Pro and M5 MacBook Pro started at $1599 with 512GB SSD.

Now it starts at $1699, a $100 bump, but comes with a 1TB SSD. Previously it would have cost $1799 for the 1TB SSD, so it's a $100 bump on the base price, but you're also getting the 1TB SSD for $100 less than before.

pcurve•18m ago
To clarify, I meant the model with the Pro chip, not just the MacBook Pro name.

For example, up through the M2 generation, the base MacBook Pro came with the M2 Pro chip.

However, starting with the M3, Apple lowered the MacBook Pro MSRP to $1599, but its base configuration was downgraded to the plain M3 chip, not the M3 Pro. To get the M3 Pro, you had to pay $1999. There's a substantial performance difference between the two.

Same with the M4. To get the M4 Pro chip, you had to pay $1999.

Now to get the M5 Pro chip, it's $2199. Still a good value, but just saying it's a deviation from the trend.

mathverse•1h ago
Nano-texture is worth the upgrade if you are on a MacBook Pro, whatever the M-series CPU, and don't have it.

For those of us with astigmatism it's really a night-and-day experience.

napo•1h ago
I was considering it but got cold feet when I've been told that you could damage it when cleaning it. When I open/close my laptop I leave a ton of finger prints. I'm not too good with delicate hardware stuff.
NetMageSCW•11m ago
Why are you touching the screen when you open/close your laptop??? Do you close your car doors with the window?
DGAP•1h ago
$5k machine for developers to just run claude code while they browse Reddit.
hrmtst93837•59m ago
With an additional $200/month subscription from Anthropic, because they noticed that the Kimi K2.5 they were able to run on their M5 comes nowhere close to matching Opus 4.6.
mpalmer•1h ago
I'm done buying Macs until they prove they can ship an OS
manofmanysmiles•1h ago
I love the following section of their copy:

> Even More Value for Upgraders

> The new 14- and 16-inch MacBook Pro with M5 Pro and M5 Max mark a major leap for pro users. There’s never been a better time for customers to upgrade from a previous generation of MacBook Pro with Apple silicon or an Intel-based Mac.

I read as "Whoops we made the M1 Macbook pro too good, please upgrade!"

I think I will get another 2-5 years out my mine.

Apple: If you document the hardware enough for the Asahi team to deliver a polished Linux experience, I'll buy one this year!

seanalltogether•58m ago
Same, in fact the only reason right now that I would upgrade my m1 pro is if they threaten to change the design by getting rid of the hdmi or sd card slot, or doing something stupid like when they added the touch bar. I was locked into my old intel pro for so long because of all the bad hardware choices they were making.
virgildotcodes•33m ago
You may get your wish with all the rumors of a touch screen on the M6 MBPs.
throwforfeds•17m ago
Love that they didn't learn anything from the touchbar.
satvikpendem•57m ago
I read it the same way. I should've gotten way more RAM back when I got my M1 and RAM was still cheap although this was of course before the LLM boom so there was no way to really know.
marpstar•6m ago
I maxed my M1 out when I bought it because I was frustrated with the 16GB max in the previous machines. I use my machine for all sorts of things and some days you just don't feel like exiting apps to make space for new ones.

I still don't have a strong urge to upgrade. I could probably get by on 32GB (like my work-issued machine is) but 64GB is the right amount of headroom for me.

jeanlucas•52m ago
Well, I just upgraded from Intel late last year. There are lots of users still on Intel :)
bsimpson•42m ago
There was a magical window at Google where you could be issued an iMac Pro 5k. (To this day, the standard issue monitor is still 1440p.)

~9 years later, there are a lot of people still using it as their main machine, waiting until we get kicked off the corp network for lack of software support.

dawnerd•21m ago
My 32gb m1 max was probably the best purchase I've made. Still plenty of headroom in performance left in this beast. Wonder what reason they'll use to end software support in the future. Bet it'll be some security hardware they make up for the sake of forcing upgrades.
hrmtst93837•1h ago
> M5 Pro supports up to 64GB of unified memory with up to 307GB/s of memory bandwidth, while M5 Max supports up to 128GB of unified memory with up to 614GB/s of memory bandwidth.

This is the important statement. 614GB/s is quite decent, however a NVIDIA RTX 5090 already offers 1,792 GB/s (roughly 3x) of memory bandwidth, for comparison.

lm28469•59m ago
> NVIDIA RTX 5090 already offers 1,792 GB/s

You can buy two m5 pro base model for the same price as a single 5090...

dylan604•28m ago
That's a fun comparison, but can you run those 2 m5 pros in parallel to accomplish 2x the work? Otherwise, you just told me you can buy 2 toyota corollas for the price of 1 F-150 while trying to convince me you can haul your boat behind both corollas at the same time.
lm28469•14m ago
You can also buy a 64gb mini, save $1k and do more work than what you could do with a single 5090.

In Europe I can get a 128gb mac studio m4 max for 300 euros more than a 5090 (for which you still need to buy a power supply, motherboard, cpu , &c.)

hrmtst93837•3m ago
But the inference on the mac studio m4 max will be slower than on the 5090, even though you can load larger models.
Someone1234•50m ago
You're right a $3600 graphics card is worse than a $2600 laptop; but from my perspectives they're very different products. Not least of all because even at $3600 for a RTX 5090 you still have the whole rest of the computer left to purchase.
asdhtjkujh•9m ago
I imagine the upcoming M5 Ultra will be competitive in this regard. The M3 Ultra already has 819GB/s and it's two generations behind.
pwython•1h ago
Well that's. Just. Great. I bought a 64GB M4 Max MBP last month. I'm past the 14-day return window. I figured the M5 was near, but assumed M5 Max would come a bit later. Not sure where I came up with that.
abiraja•25m ago
M5 has been out since last year, no?
rapfaria•25m ago
Not sure either since M5 base has been available for months now
dylan604•22m ago
This is always the gamble with buying a Mac. Either purchase right when the new is released, or be on the fence of your new becoming old a couple of weeks after purchase.
owenpalmer•1h ago
The screenshot of running LM Studio alongside Maya is a massive hardware flex.

Wish it was Blender though ;)

MagicMoonlight•1h ago
They’re giving us extra storage… but they’ve put the price up by 200, which is as much as they charged for the storage anyway.
NetMageSCW•5m ago
Why do you think the price went up by $200?
heurs•59m ago
Honest question. Is it possible to install an earlier version of macOS on these machines? Liquid glass looks so.. unprofessional to my eyes. And I hear it's also unstable.
adamtaylor_13•48m ago
That's a big part of what's keeping me from upgrading. Every time I look at my wife's iPhone I'm dumbfounded by just how bad the liquid glass looks.

It's the first time I've ever been so repulsed by a design that I actively avoid it just... out of sheer preference.

philistine•45m ago
I have a base M5 since last year. You cannot, no. It is literally impossible. Do with that what you will.
icambron•39m ago
It does look terrible, but I haven't found it to be unstable, personally
Hasz•32m ago
accessibility settings can turn off some (but not all) of the garish animations, transparencies, etc.
dmix•25m ago
You barely see any liquid glass on Tahoe. I keep my dock hidden and it's just the icons mostly which aren't that different than before.
user3939382•58m ago
And your native CLI tools will continue to be from 2011 with 0 attention paid to the dev experience until it’s Swift, and we’ll continue to lock you out of running programs from other human beings we didn’t approve without a 6 step ritual in the OS. Oh and all apps will continue to constantly phone home i.e. pay for the machine so Google Adobe and Microsoft can run updaters and telemetry on it all day.
cmdrmac•13m ago
Good point about the telemetry part. I've been using Little Snitch for the past few years and just block all the telemetry calls.
NetMageSCW•7m ago
Or don’t use Google, Adobe or Microsoft software if that bothers you? And how is that Apple’s fault?
justin66•58m ago
“An Unrivaled Experience with macOS Tahoe”
tamimio•56m ago
I will wait for the new mac mini instead
otterley•55m ago
I checked the fine print on the product website: by “up to 4x faster LLM prompt processing,” they’re specifically referring to time to first token. So it’s not about token generation rate (tokens per second).
jasonjmcghee•45m ago
It would probably be worth finding a more friendly way to market this, but it's a reasonable / accurate way to say it.

The prompt processing sped up.

Not the output generation.

M4 was notoriously slow at this compared to DGX etc.
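To make the distinction concrete, here's a back-of-the-envelope latency model (illustrative numbers only, not Apple's benchmarks): time to first token is dominated by prompt-processing speed, while total response time is mostly generation speed once the prompt is long gone.

```python
# Rough latency model for local LLM inference (illustrative numbers only).
def latency(prompt_tokens, output_tokens, pp_tps, gen_tps):
    """Return (time_to_first_token, total_time) in seconds."""
    ttft = prompt_tokens / pp_tps            # prompt processing ("prefill")
    total = ttft + output_tokens / gen_tps   # plus sequential decoding
    return ttft, total

# Hypothetical M4-class chip vs one with "4x faster prompt processing",
# generation speed held equal in both cases.
old = latency(prompt_tokens=8000, output_tokens=500, pp_tps=250, gen_tps=50)
new = latency(prompt_tokens=8000, output_tokens=500, pp_tps=1000, gen_tps=50)
print(old)  # (32.0, 42.0)
print(new)  # (8.0, 18.0)
```

With these made-up numbers, a 4x prefill speedup cuts time to first token from 32s to 8s, but total time only improves ~2.3x because tokens/sec is unchanged.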

MagicMoonlight•54m ago
You have to pay separately for the charger now. £99, what a bargain.
SirMaster•49m ago
Or just don't buy an Apple charger? You can get a perfectly fine small 100W GaN USB-C charger for like $30 on Amazon.
NetMageSCW•8m ago
Since that is required by law, I suggest moving.
jwr•54m ago
I would probably upgrade my MacBook Pro at once if it weren't for the Tahoe disaster. Now, not so much; I'm inclined to wait until next year.
alexpham14•50m ago
Yeah, this feels like the annual “nice, but do I actually need it?” refresh if you’re already on an M4 Pro.
tristor•44m ago
I am very excited by this, but my enthusiasm is dampened a bit by the maximum memory being 128GB. I was really hoping for 256GB, which would allow me to run frontier models locally. I think with 128GB it's still feasible to use this with something like Qwen3-Coder-Next and MiniMax-M2.5, but things like Kimi-K2.5 will require significant quantization to fit, and model performance will really suffer.

I'm really wanting to build proper local-first AI workflows at home, and I think Apple has an opportunity to make that possible in a way other companies aren't really focused on, but we need significantly larger memory capabilities to do it, which I know is tough in the current memory market but should be available for a cost.
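The sizing intuition here is simple: weight memory is roughly parameter count times bytes per parameter, before KV cache and runtime overhead. A quick sketch, using hypothetical model sizes (the parameter counts below are round illustrative figures, not published specs for any of the models named above):

```python
# Back-of-the-envelope weight-memory estimate: params * bytes/param.
# Ignores KV cache and runtime overhead, so real usage runs higher.
def weights_gb(params_billion, bits_per_weight):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Hypothetical model sizes for illustration only.
for name, b in [("80B-class", 80), ("230B-class", 230), ("1T-class", 1000)]:
    for bits in (16, 8, 4):
        gb = weights_gb(b, bits)
        verdict = "fits" if gb < 128 else "does not fit"
        print(f"{name} @ {bits}-bit: {gb:.0f} GB -> {verdict} in 128 GB")
```

An 80B-class model at 4-bit (~40 GB) fits comfortably; a 230B-class model at 4-bit (~115 GB) barely squeezes in; a 1T-class model (~500 GB at 4-bit) is out of reach, which is why 256GB+ matters for the largest models.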

vardump•41m ago
Tell me about it. I checked the page wondering whether I should go for the 256 GB or 512 GB RAM model.

128 GB maximum.

Sigh.

whywhywhywhy•42m ago
$5000 laptop you have to pay to add a power adapter… gratuitous penny pinching from Tim Cook's Apple.

It's one of those things: yes, if I'm spending that much on a laptop I can afford to spend $80 on the adapter too, but does it feel good as a customer, or are you souring the experience of buying from you just to earn a few more dollars?

kylec•30m ago
I'm assuming you're in the EU or UK, Apple is required by law to not include a power adapter:

https://appleinsider.com/articles/25/10/15/eu-gets-what-it-a...

In the US they provide one in the box free of charge.

mort96•30m ago
This is one thing I don't really blame Apple for, and I think everyone else will follow suit -- and not just because Apple is doing it.

The EU requires that users must be able to buy a device without a charger. It's a huge supply chain challenge to add two variants of every single SKU, one with a charger and one without. So the obvious solution is to sell the charger separately, since you need that regardless, and always sell the device without a charger. You avoid having two variants of everything that way.

Now, you could maybe argue that Apple should default to bundle a charger with your laptop, so that you'd have to uncheck a "bundle charger" checkbox on their website. But do you really care whether your laptop costs $2200 and you can buy a charger for $60 or your laptop costs $2260 and you can save $60 by removing the charger?

You can make an argument that doing it Apple's way hides a price increase. And yeah, that's probably fair. But it's not like Apple is afraid of non-hidden price increases either.

wpm•19m ago
I have a huge tote box full of power bricks, most of them white Apple ones. I have a stack of 60-90W Apple USB-C ones too that I don't use cause they only have one port and are larger and worse than modern GaN units that can do 140W on one port while also pushing 30 or 60 on the others.

So, if you want one of mine, you can have one. On me. Because I'm fucking drowning in the things and appreciate not having to deal with another one.

roblh•37m ago
Kinda funny that the top image is Capture One when Apple literally owns Photomator and gives you the option of bundling it when you buy.
jftuga•31m ago
I wonder how this compares to my M4 Air with 10 GPU cores and 32 GB of RAM. My system can only run ~14B-sized models at any reasonable speed, and the accuracy of models that size can be underwhelming. I'm looking forward to a time when running larger models locally is affordable.
abiraja•26m ago
I just bought an M5 MacBook Pro 2 weeks ago. Thinking of returning it and getting an M5 Pro with the same configuration for only $200 more. How should I compare M5 vs M5 Pro?
bob1029•19m ago
I feel like Apple pulled an Instant Pot with the M1 MacBook Pro. I still haven't had a single situation where I felt like spending more money would improve my experience. The battery is wearing out a bit, but it started out life with so much runtime that losing a few hours doesn't seem to matter.
post_break•15m ago
My M3 Pro with 18GB of RAM still feels like a beast. The only thing I can make it suffer with so far is generating meshes from 3D scanning, and even then I'm just patient. Apple is suffering from success with these older laptops; it's a tough sell to upgrade, even for the M1 Max folks.
MBCook•13m ago
Can someone comment on the new dual die thing they’re promoting for how they make the M5 Pro and M5 Max chips?

How is that different from the silicon interposer they were using before?

The big change is the two dies don't have to be fabbed next to each other in a single wafer, which is fantastic for costs and yields. But would this affect the interconnect speed somehow?

How would the two be wired together?

Could this mean the Ultra comes back in M6 since it would be easier to fab?