Also, there are countless reports of M1 8GB MacBook Airs bricked because the SSD used up its write cycles.
It would be a surprise if more than 0.1% of MacBook Neo users have even heard of DuckDB.
Which means that this article is probably just riding the hype.
It’s staggering. Jaw dropping. Bandwidth is even worse, like 10000X markup.
Yet cloud is how we do things. There’s a generation or maybe two now of developers who know nothing but cloud SaaS.
I watched everyone fall for it in real time.
The tooling — K8S with all its YAML, Terraform, Docker, cloud CLI tools, etc. — is pretty hideously ugly and complicated. I watch people struggle to beat it into shape just like they did with sysadmin automation tools like Puppet and Chef a decade or more ago. We have not removed complexity, only moved it.
The auto-scaling thing is a half-truth. It can do this if you deploy correctly, but the zero-downtime promise holds maybe half the time. It also does this at greatly inflated cost.
Today you can scale with bare metal. Nobody except huge companies physically racks anymore. Companies like Hetzner and DataPacket have APIs to bring boxes up. There's a delay, but you solve that with a bit of over-provisioning. Very, very few companies have workloads so bursty and irregular that they need full, limitless up-and-down scaling. That's one of those niche problems everyone thinks they have.
The uptime promise is false in my experience. Cloud goes down for cluster upgrades and myriad other reasons just as often as self-managed stuff. I've seen serious unplanned outages with cloud too. I don't have hard numbers, but I would definitely wager that if cloud is better for uptime at all, it's not enough of an improvement to justify that gigantic markup.
For what cloud charges I should, as the deploying user, receive five nines without having to think about it ever. It does not deliver that, and it makes me think about it a lot with all the complexity.
The only technical promise it makes good on, and it does do this well, is not losing data. They've clearly put more thought into that than any other aspect of the internal architecture. But there are other ways to not lose data that don't require you to pay a 10X markup on compute and a 10000X markup on transfer.
I think the real selling point of cloud is blame.
When cloud goes down, it’s not your fault. You can blame the cloud provider.
IT people like it, and it’s usually not their money anyway. Companies like it. They’re paying through the nose for the ability to tell the customer that the outage is Amazon’s fault.
Cloud took over during the ZIRP era anyway, when money was infinite. If you have growth, raise more. COGS doesn't matter.
Maybe cloud is ZIRPslop.
> Here's the thing: if you are running Big Data workloads on your laptop every day, you probably shouldn't get the MacBook Neo.
> All that said, if you run DuckDB in the cloud and primarily use your laptop as a client, this is a great device
That's not a tl;dr, that's just a subheader.
Or am I missing something?
I built multiple iOS apps and went through two startup acquisitions with my M1 MBA as my primary computer, as a developer. And the Neo is better than the M1 MBA. I edited my 30-45 minute 4K race videos in FCP on that Air just fine.
(Maybe the fans sometimes sound like they're a jet engine taking off…)
Finally just put an order in for a new 16" MBP M5 Max with 48GB memory only because it looks like they're going to stop supporting the Intel stuff this year and no more software updates. It'll probably be obsolete in six months with the rate things are going, but I've been averaging seven years between upgrades so it should be good!
So, the M5 with 48GB of RAM will be amazing.
Those apps don’t need every single byte of memory you see in Activity Monitor to be active in RAM all of the time. The OS swaps out unused parts to the very fast SSD. If you push it so far that active pages are constantly being swapped out as apps compete then you start to notice, but the threshold for that is a lot higher than HN comments seem to think.
But... you can do the same exercise with a $350 Windows thing. Everyone knows you can do "real dev work" on it, because "real dev work" isn't a performance case anymore, hasn't been for like a decade now, and anyone who says otherwise is just a snob wanting an excuse to expense a $4k designer fashion accessory.
IMHO the important questions to answer are on the business side: will this displace sales of $350 Windows machines, and (critically) will it displace sales of $1.3k Airs?
HN always wants to talk about the technical stuff, but the technical stuff here isn't really interesting. The MacBook Neo is indeed the best laptop you can get for $6-700.
But that's a weird price point in the market right now, as it underperforms the $1k "business laptops" (to avoid cannibalizing Air sales) and sits well above the "value laptop" price range.
There is always a trade-off of cost/convenience/power, and some folks are going to end up at the Neo end of the spectrum.
My good old LG Gram (from 2017? 2015? don't even remember) already had 24 GB of RAM. That was 10 years ago.
A decade later, I cannot see myself using a laptop with a third of the memory.
If it didn't, Apple has other laptops today with more RAM.
That couldn't be more accurate
Their numbers are a bit outdated. M5 MacBook Pro SSDs are literally 5x this speed. It's wild.
That's decently fast but not especially remarkable; most Gen4 NVMe drives can hit 6-7GB/s.
https://www.apple.com/newsroom/2026/03/apple-introduces-macb...
"The new MacBook Pro delivers up to 2x faster read/write performance compared to the previous generation reaching speeds of up to 14.5GB/s..."
Those speeds on the Pro/Max are impressive though, more in line with Gen5 NVMe drives. Those have been available in desktops for some time but AFAIK the controllers are still much too power hungry for laptops, so I think Apple's custom controller is actually the first to practically hit those speeds on mobile.
I wish more companies would do showcases like this of what kind of load you can expect from commodity-ish hardware.
The laptop is gonna have some local code, maybe a lot, but if I'm doing legitimate "big data," that data is living in the cloud somewhere, and the laptop is just my interface.
Having said that, DuckDB is awesome. I recently ported a 20-year-old Python app to modern Python. I made the backend swappable, Polars or DuckDB. Got a 40-80x speed improvement. Took 2 days.
:shrug: as to whether that makes the laptop or the giant instance the better place to do one's work…
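The original port's code isn't shown, but the "swappable backend" idea is worth sketching. Here is a minimal, hypothetical version of the pattern: one interface, a pure-Python reference backend, and a stand-in for where a Polars or DuckDB backend would plug in. All names here are illustrative, not the actual app's API.

```python
from typing import Protocol


class Backend(Protocol):
    """Anything that can total a column of values."""
    def total(self, values: list[float]) -> float: ...


class PurePython:
    """The slow path a 20-year-old app effectively used: a plain loop."""
    def total(self, values: list[float]) -> float:
        acc = 0.0
        for v in values:
            acc += v
        return acc


class VectorizedStub:
    """Stand-in for the Polars/DuckDB backend: same interface,
    different engine underneath (imagine pl.Series(values).sum() here)."""
    def total(self, values: list[float]) -> float:
        return sum(values)


def run(backend: Backend, values: list[float]) -> float:
    # Calling code never knows which engine it got -- that is the
    # whole point of making the backend swappable.
    return backend.total(values)


print(run(PurePython(), [1.0, 2.0, 3.0]))      # 6.0
print(run(VectorizedStub(), [1.0, 2.0, 3.0]))  # 6.0
```

The speedup in the comment comes from what sits behind the interface, not the interface itself: both backends answer the same question, but one does it in vectorized/native code.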
Did a PoC on an AWS Lambda for data that was GZ'ed in an S3 bucket.
It was able to replace about 400 C# LoC with about 10 lines.
Amazing little bit of kit.
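The ~10 DuckDB lines aren't shown, but the shape of what they replace is easy to sketch without DuckDB or AWS: decompress a gzipped CSV (the bytes you'd fetch from S3) and run one aggregate query over it. Column names and the query are hypothetical; in the real Lambda, DuckDB's CSV reader over an `s3://` path collapses all of this into roughly one SELECT.

```python
import csv
import gzip
import io
import sqlite3


def summarize_gz_csv(gz_bytes: bytes) -> list[tuple]:
    """Decompress a gzipped CSV and run one aggregate over it --
    the same decompress/load/query shape as the DuckDB version."""
    text = gzip.decompress(gz_bytes).decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(text)))
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user TEXT, amount REAL)")
    con.executemany(
        "INSERT INTO events VALUES (?, ?)",
        [(r["user"], float(r["amount"])) for r in rows],
    )
    return con.execute(
        "SELECT user, SUM(amount) FROM events GROUP BY user ORDER BY user"
    ).fetchall()


# Demo: an in-memory blob standing in for the S3 object download.
sample = gzip.compress(b"user,amount\nann,1.5\nbob,2.0\nann,3.5\n")
print(summarize_gz_csv(sample))  # [('ann', 5.0), ('bob', 2.0)]
```

The point of the comment stands either way: most of the 400 lines of C# were presumably hand-rolled decompress/parse/aggregate plumbing that a query engine does for you.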
I guess they’re using a different definition?
very much so…
You have phones that are faster than cloud VMs of the past. You can use bare metal servers with up to 344 cores and 16TB of RAM.
I used to share your definition too, but I now say that if it doesn’t open in Microsoft Excel, it’s big data.
As you say, single machines can scale up incredibly far. That just means 16 TB datasets no longer demand big data solutions.
Many people like to think they have big data, and you kinda have to agree with them if you want their money. At least in consulting.
Also you could go well beyond a 16TB dataset on a single machine. You assume that the whole uncompressed dataset has to fit in memory, but many workloads don’t need that.
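That point generalizes: many analytical queries only need one chunk of the data resident at a time, so the dataset size and the memory footprint are unrelated. A stdlib-only sketch of a constant-memory streaming aggregate (engines like DuckDB do the same thing internally when scanning files larger than RAM; the chunk sizes and data here are made up):

```python
from typing import Iterable, Iterator


def streaming_sum(chunks: Iterable[list[float]]) -> float:
    """Aggregate in constant memory: only one chunk is ever resident,
    no matter how many chunks the full dataset has."""
    total = 0.0
    for chunk in chunks:  # each chunk could be one row group / one S3 part
        total += sum(chunk)
    return total


def fake_dataset(n_chunks: int, chunk: list[float]) -> Iterator[list[float]]:
    """Simulate an arbitrarily large dataset as a generator of chunks."""
    for _ in range(n_chunks):
        yield chunk


print(streaming_sum(fake_dataset(1000, [1.0, 2.0])))  # 3000.0
```

Only workloads that genuinely need random access to the whole set at once (some joins, some sorts) force the data into memory, and even those usually have spill-to-disk variants.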
How many people in the world have such big datasets to analyse within reasonable time?
Some people say extreme data.
Google has big data. You are not google.