Leaked unboxing video reveals unannounced M5 iPad Pro in full - https://9to5mac.com/2025/09/30/leaked-unboxing-video-reveals...
https://x.com/markgurman/status/1973048229932507518 | https://xcancel.com/markgurman/status/1973048229932507518
Exclusive! Unboxing the iPad Pro with the M5 before Apple! - https://www.youtube.com/watch?v=XnzkC2q-iGI
Big boy is bitching about a meager 10% increase in CPU and 30% increase in GPU as a nothing burger. "Who would upgrade from M4 to M5?" Exactly. The difference shows when you upgrade from an older model to the latest; most people do not upgrade annually. I'm looking to replace my 6th-gen tablet, but now I might just get an M4 after the M5 is official and get a nice discount on what will be a helluva upgrade for me.
Some of the comments in the threads you linked also suggest Russia has infiltrated Apple, but my guess would be somewhere on the Chinese side of the supply chain.
[edit] typo
Single thread MacBook progression on Geekbench:
M1: 2350
M2: 2600
M3: 3100
M4: 3850
M5: 4400 (estimated)
TL;DR: I expect a smaller MBP M4-to-M5 pop compared to the iPads' M4-to-M5 jump, because the latter are benefiting from new cooling tech.
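Just as a sanity check on where an estimate like that can come from, here's a rough trend extrapolation from the published numbers above (illustrative only, not how the original figure was actually derived):

```swift
import Foundation

// Project the next single-core score from the average generation-over-generation
// growth of the published M1..M4 numbers (rough extrapolation, nothing more).
let singleCore: [Double] = [2350, 2600, 3100, 3850]   // M1..M4

let ratios = zip(singleCore.dropFirst(), singleCore).map { $0.0 / $0.1 }
let meanGrowth = pow(ratios.reduce(1, *), 1.0 / Double(ratios.count))

print(String(format: "average growth: %.1f%%", (meanGrowth - 1) * 100))      // ~17.9%
print(String(format: "projected M5:   %.0f", singleCore.last! * meanGrowth)) // ~4539
```

That naive projection lands in the same ballpark as the 4400 estimate; the actual uplift obviously depends on clocks, IPC, and cooling rather than a trend line.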
The actual IPC increase and perf/clock of these chips excluding SME specific acceleration is MUCH smaller.
AMX has been present in every M series chip and the A series chips starting with the A13. If you are comparing M series chip scores in Geekbench 6 they are all using it, not just the latest ones.
Any app using Apple's Accelerate framework will take advantage of it.
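To make that concrete, here's a minimal sketch of what "using Accelerate" looks like from the app's side (sizes and values are just for illustration): a plain vDSP matrix multiply. Whether a given call actually lands on the AMX/SME units is Accelerate's internal dispatch decision; the calling code never references them.

```swift
import Accelerate

// 256x256 single-precision matrix multiply via vDSP. Accelerate routes this
// through the platform's matrix hardware where available; the app code only
// sees a plain library call.
let n = 256
let a = [Float](repeating: 1.0, count: n * n)
let b = [Float](repeating: 2.0, count: n * n)
var c = [Float](repeating: 0.0, count: n * n)

vDSP_mmul(a, 1, b, 1, &c, 1,
          vDSP_Length(n), vDSP_Length(n), vDSP_Length(n))

print(c[0])  // 512.0 for these inputs (256 * 1.0 * 2.0)
```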
But ultimately with a benchmark like Geekbench, you're trusting them to pick a weighting. Geekbench 6 is not any different in that regard to Geekbench 5 – it's not going to directly reflect every app you run.
I was really just pointing out that the idea that "no" apps use SME is wrong and therefore including it does not invalidate anything – it very well could speed up your apps, depending on what you use.
Multi-core:
M1: 8350
M2: 9700
M3: 11650
M4: 14600
M5: 16650 (estimated)
This is assuming an 8% uplift as mentioned. Also nice.
M1 (any): 4P + 4E
M2 (any): 4P + 4E
M3 (any): 4P + 4E
M4 (iPad): 3P + 6E
M4 (Mac): 4P + 6E
M5 (iPad): 3P + 6E (claimed)
M5 (Mac): Unknown
It's worth noting there are often higher tier models that still don't earn the "Pro" moniker. E.g. there is a 4P + 8E variant of the iMac which is still marketed as just having a normal M4.

Also, the size numbers are lies and aren't the actual size of anything.
The above summary also excludes the GPU, which seems to have gotten the most attention this generation (~+30%, even more in AI workloads).
single: 3960
multi: 22521

Suppose I'm trying to decide whether to buy a 32-core system with a lower base clock or a 24-core system with a higher base clock. What good is it to tell me that both of them are the same speed as the 8-core system because they have the same boost clock and the "multi-core" benchmark doesn't actually use most of the cores?
Suppose I run many different kinds of applications and am just looking for an overall score to provide a general idea of how two machines compare with one another. That's supposed to be the purpose of these benchmarks, isn't it? But this one seems to be unusually useless at distinguishing between various machines with more than a small number of cores.
Your analysis is also incorrect for many of these systems. Each core may have its own L2 cache and each core complex may have its own L3, so systems with more core complexes don't inherently have more contention for caches because they also have more caches. Likewise, systems with more cores often also have more memory bandwidth, so the amount of bandwidth per core isn't inherently less than it is in systems with fewer cores, and in some cases it's actually more, e.g. a HEDT processor may have twice as many cores but four times as many memory channels.
But in your example, deciding between 24 cores with somewhat higher frequency or 32 cores with somewhat lower frequency based on some general-purpose benchmark is essentially pointless. The difference will be small enough that only a real application benchmark can tell you what you need to know. A general purpose benchmark will be no better than a coin toss, because the exact workings of the benchmark, the weightings of its components into a score, and the exact hardware you are running on will have interactions that determine the decision to a far greater degree. You are right that there could be shared or separate caches, shared or separate memory channels. The benchmark might exercise those, or it might not. It might heat certain parts of the die more than others. It might just be the epitome of embarrassingly parallel benchmarks, a BogoMIPS-style empty busy loop. The predictive value of the general purpose benchmark is nil in those cases. The variability from the benchmark maker's choices will always necessarily introduce a bias and therefore a measurement uncertainty. And what you are trying to measure is usually smaller than that uncertainty. Therefore: no better than a coin toss.
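A toy illustration of that weighting point, with entirely made-up subscores and weights (not Geekbench's actual workloads): the same two machines can trade places depending on how the composite score is assembled.

```swift
import Foundation

// Two hypothetical machines with made-up subscores on three workloads.
// Machine A is strong on the lightly-threaded tasks, machine B on the
// heavily parallel one.
let subscores: [String: [Double]] = [
    "A": [2200, 2000, 900],    // [browsing, photo edit, ray trace]
    "B": [1700, 1600, 1800],
]

// Composite = weighted geometric mean of the subscores.
func composite(_ s: [Double], weights w: [Double]) -> Double {
    exp(zip(s, w).map { log($0.0) * $0.1 }.reduce(0, +) / w.reduce(0, +))
}

let desktopWeights = [0.5, 0.3, 0.2]   // favours interactive work
let renderWeights  = [0.1, 0.2, 0.7]   // favours parallel throughput

for (name, s) in subscores.sorted(by: { $0.key < $1.key }) {
    let d = Int(composite(s, weights: desktopWeights))
    let r = Int(composite(s, weights: renderWeights))
    print("\(name): desktop-weighted \(d), render-weighted \(r)")
}
// A "wins" under one weighting and B under the other; the benchmark maker's
// choice of weights decides the ranking, not the hardware.
```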
And a benchmark can then provide a reasonable cross-section of different applications. Or it can yield scores that don't reflect real-world performance differences, implying that it's poorly designed.
Many of the systems claiming to have that CPU were actually VMs assigned random numbers of cores less than all of them. Moreover, VMs can list any CPU they want as long as the underlying hardware supports the same set of instructions, so unknown numbers of them could have been running on different physical hardware, including on systems that e.g. use Zen4c instead of Zen4 since they provide the same set of instructions.
If they're just taking all of those submissions and averaging them to get a combined score it's no wonder the results are nonsense. And VMs can claim to be non-server CPUs too:
https://browser.geekbench.com/v6/cpu/search?utf8=%E2%9C%93&q...
Are they actually averaging these into the results they show everyone?
https://browser.geekbench.com/v6/cpu/6807094
https://browser.geekbench.com/v6/cpu/9507365
The ones on actual hardware with lower scores typically have comments like "Core Performance Boost Off":
https://browser.geekbench.com/v6/cpu/1809232
And that's still a higher score than the one listed on the main page.
The only real distinction is between high end systems and low end systems, but that's exactly what a benchmark should be able to usefully compare because people want to know what a higher price tag would buy them.
Most people looking to optimize Epyc compile or render performance care about running inside VMs, with all I/O going to SANs, assuming there is enough work that you can yield to other jobs to increase throughput, and ideally running near thermal equilibrium.
Faster hardware doesn’t exclusively make developers lazy, it also opens up capability.
Like, if I were buying a new workstation right now, I’d want to be shelling out $2000 so that I could get something like a Ryzen AI 395+ with 128GB of fast RAM for local AI, or an equivalent Mac Studio.
That’s definitely not because I’m “lazy,” it’s because I can’t run a decent model on a Raspberry Pi.
I mention this because I'm guessing you might be using Docker Desktop, which is kinda slow.
If you fail at these, you can even trash your SSD and end up needing to replace the whole laptop, since it's soldered in.
I will say this - and most will not like this - that I'd go out and buy a M* MacBook if they still kept Boot Camp around and let me install Windows 11 ARM on it. I've heard Linux is pretty OK nowadays, but I have some... ideological differences with the staff behind Asahi and it is still a wonky hack that Apple can put their foot down on any day.
Benchmarking how long people could sit with a laptop on their lap while running Geekbench could be an interesting metric though.
Yes, they've done some nice things to get the performance and energy efficiency up, but it's not like they've got some magic bullet either. From what I've seen in reviews, Intel is not so far off with things like the Ultra 7 258V. If they caught up to TSMC on the process node, they would probably match Apple too.
This was a reply to "never having had a computer that felt fast for so long".
For some tasks, this CPU with the GTX 970 still feels faster than MacBook M2 or recent Ryzen laptop APU.
Which is not to say that the Air is a bad device; it's an amazing laptop (especially for the price, I have not seen a single Windows laptop with this build quality even at 2x the price) and the performance is good enough that if I were doing something like VS Code and node/frontend work only, it would be more than enough.
But also people here oversell its capabilities: if you need anything more CPU/memory intensive, a Pro is a must, and the "Apple needs less RAM because of the fast IO/memory" argument is a myth.
But even when I kill all processes and just run a build, you can see the lack of cores slow the build down enough that it is noticeable. Investing in a 48GB RAM/Pro version will definitely be worth it for the improved experience; I can get by in the meantime by working more on my desktop workstation.
A car that does this in 4 seconds is still fast (though it takes twice as long).
>>In the context of cars:
>>"Fast" refers to top speed. A fast car has a high maximum velocity. It can cover a great distance in a sustained manner once it reaches its peak speed. Think of the Bugatti Chiron or a Koenigsegg, which are famous for their incredibly high top speeds.
>>"Quick" refers to acceleration. A quick car can get from a standstill to a certain speed (often 0 to 60 mph or 0 to 100 km/h) in a very short amount of time. This is about how rapidly the car can change its velocity. Modern electric vehicles, like the Tesla Model S Plaid or the Lucid Air Sapphire, are prime examples of exceptionally quick cars due to the instant torque of their electric motors.
Moore's law was never about single threaded performance, it was about transistor count and transistor cost, but people misunderstood it when single threaded performance was increasing exponentially.
https://news.ycombinator.com/item?id=45434910
Your reply here is not directly related or an answer to the comment you replied to.
So I guess we've caught up with the desktop now.
Actually I assume we caught up a while ago if I count the beefy multi-core MX-Ultra variants they released; really it's just the base model that has caught up. On the other hand, I could have spent four times as much for twice as many cores on my desktop as well.
On the move, laptops will always be a bit slow because all the tricks to save idle usage don't help much when you're actually putting them to work.
The only place I feel it is when I am running a local llm - I do get appreciably more tokens per second.
Compared to every Intel MBP I went through, where they would show their age after about 2-3 years, and every action/compile required more and more fans and throttling, the M1 is still a magical processor.
I've switched now to desktop Linux, using an 8C/16T AMD Ryzen 7 9700X with 64GB. It's like night and day, but it is software related. Apple just slows everything down with their animations and UI patterns, probably to nudge people to acquire faster newer hardware.
The change to Linux is a change in lifestyle, but it comes with a lot of freedom and options.
https://browser.geekbench.com/v6/cpu/compare/14173685?baseli...
About 10% faster for single-core and 16% faster for multi-core compared to the M4. The iPad M5 has the same number of cores and the same clock speed as the M4, but has increased the RAM from 8GB to 12GB.
M1 (16GB)
M1 Pro (16GB)
M2 Pro (16GB)
M3 Pro (32GB)
M4 Air (24GB)
I currently switch between the M2 Pro and the M4 Air, and the Air is noticeably snappier in everyday tasks. The 17' M3 Pro is the faster machine, but I prefer not to lug it around all day, so it gets left home and occasionally used by the wife.
Not the 17 inch tall one they built and put on stage for the MacWorld keynote speech - that was in danger of being trod upon by a dwarf.
For $800 the M4 Air just seems like one of the best tech deals around.
Only if you don't mind macOS.
Still better than all the alternatives for someone like me who has to straddle clients expecting MS Office, gives me a *nix out of the box, and can run Logic, Reaper, MainStage.
Reaper has a native Linux client. Logic and MainStage... are you serious? :D
Windows on ARM performance is near native when run in a VM under macOS. `virtiofs` mounts aren't nearly as fast as native Linux filesystem access, but Colima/Lima can be very fast if you (for example) move build artifacts and dependency caches inside the VM.
See, that's where the MacOS shitshow begins: Parallels costs €189.99 and it looks like they are pushing towards subscriptions. I am not in the ecosystem, but Parallels is the only hypervisor I've ever seen recommended.
Another example is Little Snitch. Beloved and recommended Firewall. Just 59€! (IIRC, MacOS doesn't even respect user network configuration, when it comes to Apple services, e.g. bypassing VPN setups...)
Now, don't get me wrong, I am certain there are ways around it, but Apple people really need to introspect what it commonly means to run a frictionless MacOS. It's pretty ridiculous, especially coming from Linux.
I mean c'mon... paying for a firewall and hypervisor? Even running proprietary binaries for these kinds of OS-level features seems moderately insane.
Except when you need something like UDP ports, for example. I tried it for 2-3 weeks, but I kept encountering similar issues. In the end I just started to use custom Alpine VMs with UTM, and run Docker inside them. All networking configured with pf.
Don't get me wrong, I really admire what Apple has done with the M CPUs, but I personally prefer the freedom of being able to install Linux, BSD, Windows, and even weirder OSes like Haiku.
No.
Even if it was better than lima (and the builtin posix/unix environment), which: it ain’t, it doesn’t nearly make a dent in the mandatory online account, copilot shit and all the rest.
If you like Windows, you’ll find it better with WSL2. In fact, I see many developers at my org who claim they’ll switch to Windows (from Mac) when we make it available internally.
However, if you love the Mac, you'll never find Windows palatable no matter what.
And then there’s all shades of gray.
As long as you're ok being tethered to the wall, and even then, guzzling power.
The whole point of Apple Silicon is that its performance is exactly the same on battery as tethered to the wall AND it delivers that performance with unmatched power efficiency.
It's the same on a pure desktop. Look at the performance per watt of the Mac Mini. It's just nuts how power efficient it is. Most people's monitors will use more power than the Mac Mini.
It'd be tempting if I had any idea what the software compatibility story would be like. For example, the company I'm contracting with now requires a device monitor for SOC2 compliance (ensuring OS patches are applied and hard drive encryption remains on). They don't even want to do it, but their customers won't work with them without it.
Surprise surprise, a quick check of the device monitor company's website shows they don't support ARM architecture devices at all.
I have the Surface Laptop 7 with the X Elite in it. The only thing I've run into that outright didn't run was MS SQL Server.
It's not my main machine, that is still an M4 Macbook pro but I hop on it occasionally to keep up with what Windows is doing or if I need to help someone with something windows specific. I've got WSL2, Docker, VSCode, etc. all running just fine.
It's decent, but not amazing. It feels a little slower than the M2 Air I have, but not much; most of that is probably just Windows being Windows.
Would be nice to be able to get Linux running on one of these
I wish Microsoft put more pressure on vendors to support ARM.
Which left me bitter quite honestly as I was looking forward to them a lot.
I wouldn’t be opposed to going back to Linux. But once you stop looking for power sockets all the time and start treating your laptop like a device you can just use all day at any moment, it’s hard to go back.
You really think an average person shopping for a computer at Bestbuy cares about installing a different OS on their machine?
I certainly don't think that matters to the vast majority of the population
Sorry, I don't get the reference. What sort of expenses are you referring to? For the price of a used car you can get pretty much any workstation money can buy.
There are certainly many more options on the PC side, but it's not because Apple actively blocks users from running another OS.
Once that’s done, any distro should be able to work.
I won't recommend my monitor because it has auto-dimming you can not turn off. Good but not great.
So they are about one generation behind? That's not bad really. What's the AMD or Intel chip equivalent to the M2 or M3? Is somebody making a fanless laptop with it?
I don't think the market is there for fanless non-Mac laptops. Most people would rather have a budget system (no $ for proper passive cooling) or more powerful system.
The low end of the market is for sure bigger but I think Apple has shown that the higher end can be profitable too. Dell, HP, Lenovo, and the other big laptop makers aren't afraid of having a thousand different SKUs. They could add one more for a higher end machine that's fanless.
I bought a MacBook Air because it was cheaper and met my needs. Being passively cooled was just a nice bonus.
This really sucks. The nice thing about high end (Mx Pro/Max) MBPs is that if you need desktop-like power, it's there, but they can also do a pretty good job pretending to be MacBook Airs and stretch that 100Wh battery far further than is possible with similarly powerful x86 laptops.
This affects ultraportables too, though. A MacBook Air performs well in bursts and only becomes limited in sustained tasks, but competing laptops don't even do burst very well and still need active cooling to boot.
On the desktop front I think AMD has been killing it but both companies need to start from scratch for laptops.
IMO Apple is killing it with the mac mini too. Obviously not if you're gaming (that has a lot to do with the OS though), but if you're OK with the OS, it's a powerhouse for the size, noise, and energy required.
You can hear the fan at full load, especially on the M4 Pro. I really wish Apple went with a larger case and fan for that chip, which would allow quieter cooling.
Also, many units are affected by idle (power supply) buzzing: https://discussions.apple.com/thread/255853533?sortBy=rank
The Mac Mini is quieter than a typical PC, but it's not literally silent like, say, a smartphone.
My Mac Mini M2 never makes any noise; even when I run FFmpeg the fans don't spike. It just gets slightly warmer. Still, unless I'm doing these high CPU-bound activities, every time I touch it it's cold, as if it were turned off, which is very different from my previous Intel one that was always either warm or super hot.
Same here. I actually don't care for macOS much, and I'm one of those weirdos who actually likes Windows (with WSL).
I tried the Surface Laptop 7 with the Snapdragon X Elite, and it's... OK. It still spins up the fans quite a bit and runs hotter than my 14" M4 Pro. It's noticeably slower than the MacBook too, and doesn't wake instantly from sleep (though it's a lot better than Wintel laptops used to be).
So I've been on Apple Silicon macs for the last 4.5 years because there's just no other option out there that even comes close. I'm actually away from my desk a lot, battery life matters to me. I just want a laptop with great performance AND great battery life, silent, runs cool, high quality screen and touchpad, and decent speakers and microphone.
MacBooks are literally the only computer on the market that checks all boxes. Even if I wanted to/preferred to run Windows or Linux instead, I can't because there just isn't equivalent hardware out there.
I recently tried VirtualBox and it’s finally catching up, seems to work without any problems but I didn’t test it enough to find out the quirks.
* https://blogs.vmware.com/cloud-foundation/2024/11/11/vmware-...
You need to register an account/e-mail address for a free account:
* https://knowledge.broadcom.com/external/article?articleNumbe...
After which you can download VMware Fusion Pro and/or VMware Workstation Pro:
* https://knowledge.broadcom.com/external/article/368667/downl...
This seems to be a perpetual licence (?), so as long as it can run on the underlying OS you can continue to use it. Not sure if there's any 'phone home' functionality to track things (like has been seen with Oracle VirtualBox).
It may be the software problem as well. On Windows I regularly need to find which new app started to eat battery like crazy. Usually it ends up being something third-party related to hardware, like Alienware app constantly making WMI requests (high CPU usage of svchost.exe hosting a WMI provider, disabling Alienware service helped), Intel Killer Wi-Fi software doing something when I did not even know it was installed on my PC (disabling all related services helped), Dell apps doing something, MSI apps doing something... you get the idea.
It seems like a class of problems which you simply can't have on macOS because of the closed ecosystem.
Without all this stuff my Intel 155H works pretty decently, although I'm sure it is far away from M-series in terms of performance.
Because the OS and apps running on it were already taking advantage of multithreading, making them efficiency core friendly was easy since devs only had to mark already-encapsulated tasks as eligible for running on efficiency cores, so adoption was quick and deep.
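For anyone who hasn't seen it, that "marking" is usually nothing more than a quality-of-service hint on work that was already being dispatched; a minimal sketch with hypothetical task names:

```swift
import Dispatch

func rebuildSearchIndex() { /* hypothetical long-running maintenance task */ }
func renderVisibleThumbnails() { /* hypothetical latency-sensitive task */ }

// A .utility QoS hint tells the scheduler this queue's work is a good
// candidate for the efficiency cores.
let indexingQueue = DispatchQueue(label: "com.example.indexing", qos: .utility)
indexingQueue.async { rebuildSearchIndex() }

// Latency-sensitive work keeps a high QoS and stays eligible for the
// performance cores.
DispatchQueue.global(qos: .userInteractive).async { renderVisibleThumbnails() }
```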
Meanwhile on Windows there are still piles of programs that have yet to enter the Core 2 Duo era, let alone advance any further.
Earlier. I did some multiprocessing work on an SMP PowerPC Mac in 1997.
Of course, but I assume you don't really need to install third-party apps to control hardware. In my case Alienware and Dell bloat came from me setting up an Alienware monitor. MSI bloat came from setting up MSI GPU. Intel Killer stuff just got automatically installed by Windows Update, it seems.
> Microsoft Defender
This one I immediately disable after Windows installation so no problems here :)
At work we get CrowdStrike Falcon; it seems pretty tame for now. Guess it depends on IT-controlled scan settings though.
1. https://learn.microsoft.com/en-us/defender-endpoint/mac-supp...
It's definitely becoming less easy over time. First you had to click approve in a dialog box, then you had to right-click -> open -> approve, now you have to attempt (and fail) to run the app -> then go into System Settings -> Security -> Approve.
I wanted to install a 3rd party kernel extension recently, and I had to reboot into the safety partition, and disable some portion of system integrity protection.
I don't think we're all that far from MacOS being as locked-down as iOS on the software installation front...
Microsoft is working towards this too. They wish so bad that they were Apple.
No wonder the Ferrari of computers is more efficient and effective than a cobbled-together junkyard monstrosity... ok, I'll be more generous... the Chrysler of computers.
I don't want to suggest that Apple is ideal with its soldered constrictions, or that modularity should be done away with, but the reality is that standards need to be tightened down A LOT if the PC market really wants to compete. I for one have no problem not dealing with all the hassle of non-Apple products because I can afford it. If Apple got its botoxed, manicured head out of its rear end and started offering its products at competitive prices, it would likely dominate the majority of the computing market, which would then likely atrophy and effectively die out over time.
Let's hope that Apple remains pretentious and stubbornly greedy so that at least we have choice, and the PC sector gets at least a chance to get its standards in order, maybe even funding a gold-standard functional Linux distro that could at least hold a candle to macOS without drooling all over itself.
From replacement parts and physical endurance perspective, I mean.
For a work machine, that’s pretty easy to justify.
Ignoring that though, if work machine means an Excel machine, then it's probably overspending IMO. If work machine means workstation, then you'd probably rather want one of the >1.6k models with more working memory… or just don't go Apple.
Probably need to get the batteries replaced somewhere past the 5 year mark, but otherwise the durability is unmatched.
A few months ago Spotify on an ancient Intel Mac mini in the living room started complaining that the new version of Spotify is no longer compatible with that Mac. Then I ran Open Core and updated the MacOS to a much newer version, and Spotify is happy. Now I’ll get even more years out of that machine.
The cheapest I found was about 1000€. Buying a one-off offer from some random webshop means you would have to deal with them in case of repairs or warranty issues.
And yeah, effectively a max of 200GB of non-upgradable SSD storage certainly makes cheap offers likely, because that's borderline unusable for almost everyone who needs more than a web browser.
Intel, on the other hand, started a few generations ago with an edge in terms of efficiency, and now they're behind; they are definitely the one that fell asleep.
The fact that ARM may have unreachable efficiency doesn't mean that AMD, as x86 producer, is doing nothing.
365: 29861
https://www.cpubenchmark.net/cpu.php?cpu=Apple+M4+10+Core&id... https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+AI+9+365&...
365: 2515 / 12552
M4: 3763 / 14694
https://browser.geekbench.com/processors/amd-ryzen-ai-9-365 https://browser.geekbench.com/v6/cpu/11020192
OTOH, Geekbench correlates (0.99) with SPEC, the industry standard in CPU benchmarks and what enterprise companies such as AWS use to judge CPU performance.
https://medium.com/silicon-reimagined/performance-delivered-...
https://news.ycombinator.com/item?id=43287208
The article in question doesn't mention subpar ARM optimizations.
It has always cut both ways. This is why there exist(ed) quite a lot of people who have/had serious doubts about whether [some benchmark] actually measures the performance of, e.g., the CPU or the quality of the compiler.
The "truce" that was adopted concerning these very heated discussions was that a great CPU is of much less value if programmers are incapable of making use of its power.
Examples that evidence the truth of this "truce" [pun intended]:
- Sega Saturn (very hard to make use of its power)
- PlayStation 3 (Cell processor)
- Intel Itanium, which (besides some other problems) needed a super-smart compiler (which never existed) so that programs could make use of its potential
- in recent years: claims that specific AMD GPUs are as fast as or even faster than NVidia GPUs (also: for the same cost) for GPGPU tasks. Possibly true, but CUDA makes it easier to make use of the power of the NVidia GPU.
Well, no such thing is possible. Memory access and branch prediction patterns are too dynamic for a compiler to be able to schedule basically anything ahead of time.
A JIT with a lot of instrumentation could do somewhat better, but it'd be very expensive.
https://www.cpu-monkey.com/en/compare_cpu-amd_ryzen_ai_9_365...
Edit: Looks like OP stealth edited "double" to "50%". Still lol.
It's interesting; I've seen that often on trending posts. There is enough traffic that any variation of a comment will have readers.
Not at all; you are stuck with a machine that has only 256GB of SSD (and not upgradeable), a 60Hz LCD screen, and only 2 I/O ports.
The M4 may be the best mobile CPU, but that does not mean it will make every machine with it the best.
Not sure why 60Hz is a limitation, I've been on 60Hz for 20 years+, outside of gaming I do not see any value in going higher.
Re: refresh rate, it’s nice, but I wouldn’t miss it most of the time. I have my external monitor for my work M3 running at 120 Hz because I can, not because I need it.
Also, a Mac Mini is not mobile, and even if I take the Mini, it has neither a screen nor a keyboard. The thing is, I don't need it WHILE I'm traveling, I need my laptop at the destination. And because that is less than 5% of the time I use it, there is no issue in carrying the external 2.5" HDD that I keep on my dock. Your personal use-case may vary.
I've been running linux laptops with AMD/intel for years, and while some focus on more battery life would be welcome, the cpu never bothered me.
My primary limitation is available RAM (esp. when debugging React Native against a complete local Docker setup), which unfortunately on both AMD/Intel, but far more so on Apple, is usually tied to higher-compute CPUs, which drives up the cost of the laptop (not even including the extra cost of RAM on Apple).
The only really locally CPU-intensive processes I run on a laptop are Rust builds, and even then I would prioritize RAM over CPU if that were possible, because Rust builds are fast enough, especially when compiling incrementally (and I almost never do LTO release-like builds locally unless doing benchmarking or profiling work).
I love Rust but I think you might be the first person to say that!
Most of it can be explained away with TSMC. If you compare Apple's 5nm parts (M2) with AMD's 4nm parts, you'll see the performance swing by about the same magnitude, but in favor of AMD. The M5 is on third-generation 3nm.
It's a nice problem to have, since for most of computing history it's been the other way around. (Meaning the hardware was the constraint, not the OS.)
Sometimes, though, YouTube will make the iPad uncomfortably hot and consume the battery at an insane pace.
So, I guess there's someone using the performance.
In iPadOS 26, more extensive, Mac-like multi-window multitasking was added.
The quantity of windows you can keep open at once depends on your iPad’s SoC.
If you have a newer Pro iPad with more memory, you can keep more of them open, and the slowdown happens further down the rabbit hole.
The hardware is being pushed and used.
As another example, the iPad has pretty intensive legitimate professional content creation apps now (Logic Pro, Final Cut Pro, and others).
So there are people out there pushing the hardware, although I’ll join those who will say that it’s not for me specifically.
There is no particular reason a general purpose computer should be "not for me specifically" in terms of what you can do in software. In terms of design, sure. But not in terms of what you can do in software.
(I have a suspicion the same reason is responsible for why you basically don't find open source software on iOS devices the way you would on even Android or Windows; it doesn't make any money to take a cut out of.)
> It's a nice problem to have, since for most of computing history it's been the other way around. (Meaning the hardware was the constraint, not the OS.)
For anyone who works with (full-size) image or video processing, the hardware is still the constraint... Things like high-ISO noise reduction are a 20-second process for a single image.
I would be happy to have a laptop that was 10x as fast as my MacBook Pro.
What rumors have you seen? Anytime I've seen speculation, Apple execs seem to shut that idea down. Is there more evidence this is happening? If anything, Apple's recent moves to "macify" iPadOS indicate their strategy is to tempt people over into the locked down ecosystem, rather than bring the (more) open macOS to the iPad.
> @mingchikuo
> MacBook models will feature a touch panel for the first time, further blurring the line with the iPad. This shift appears to reflect Apple’s long-term observation of iPad user behavior, indicating that in certain scenarios, touch controls can enhance both productivity and the overall user experience.
> 1. The OLED MacBook Pro, expected to enter mass production by late 2026, will incorporate a touch panel using on-cell touch technology.
> 2. The more affordable MacBook model powered by an iPhone processor, slated for mass production in 4Q25, will not support a touch panel. Specifications for its second-generation version, anticipated in 2027, remain under discussion and could include touch support.
I don't understand the appeal, even a little bit. Reaching up to touch the screen is awkward, and every large touchpanel I've used has had to trade off antiglare coating effectiveness to accommodate an oleophobic coating. For me, this would be an objective downgrade — the touch capability would never get used, but poor antiglare would be a constant thorn in my side. I can only hope that it's an option and not mandatory, and I may upgrade once the M5 generation releases (which is supposedly just a spec bump) as insurance.
And, in regards to smudges, I mean, just don't use the touchscreen unless you have to and problem avoided.
Antiglare can be a thing, but that can be avoided by avoiding strong lighting behind you.
FWIW, I often rotate my Samsung Galaxy Book 3 Pro 360 so that the screen is in portrait mode, then hold the laptop as if it's a book and use a stylus and touch on the display with my right hand, and operate the keyboard for modifiers/shortcuts with my left, or open it flat on a lapdesk.
They also said they weren’t merging iOS and macOS, and with every release that becomes more of a lie.
One article that talks about it: https://osxdaily.com/2025/09/19/why-im-holding-off-on-upgrad...
For less discerning users maybe the rough edges aren't that noticeable. But the point of choosing Apple products is you should be a discerning consumer.
the point i'm trying to make is that "apple consumers" are more critical.
There’s no good way to phrase a thought that is fundamentally flawed.
Neither of those things worry me personally, and I think the previous user calling it a “crappification” is still somewhat of an overreaction. Obviously from an accessibility standpoint transparency/legibility is important but as far as I’m aware tweaks are being made and these things can also be turned off or modified in accessibility settings.
There have been rumours of Apple wanting to shift Macs to ARM chips for 14 years. When they made that announcement, they already knew.
https://en.wikipedia.org/wiki/Mac_transition_to_Apple_silico...
It was obvious it was going to happen. I remember seeing Apple announcing iPads doing tasks my Mac at the time could only dream of and thinking they would surely do the switch.
> It turns out that people want windowing options on their large and expensive tablet, to do long-running tasks in the background
The problem isn’t them making iOS (or iPadOS) more like macOS, it’s them doing the reverse.
Yep, the ongoing convergence made that pretty clear. The emphatic "No" was to reassure 2018's macOS developers that they wouldn't need to rewrite their apps as xOS apps anytime soon, which was (and is) true 7 years later.
This is the same session where Craig said, "There are millions of iOS apps out there. We think some of them would look great on the Mac." and announced that Mojave would include xOS apps. Every developer there understood that, as time went on, they would be using more and more shared APIs and frameworks.
> The problem isn’t them making iOS (or iPadOS) more like macOS, it’s them doing the reverse.
That ship has sailed, but it's also completely overblown.
Speak for yourself. I for one despise the current direction of the Mac and the complete disregard for the (once good) Human Interface Guidelines. It’s everywhere on macOS now.
Simple example: the fugly switches which replaced checkboxes. Not only do they look wrong on the Mac, they're less functional. With checkboxes you can click their text to toggle them; not so with the switches.
I’m not even going to touch on the Liquid Glass bugs, or I’d be writing a comment the length of the Iliad.
You'll be happy to know that checkboxes still exist and work like you'd expect. https://imgur.com/a/p2Xe1WL
Apple provides HIG guidance on switch vs. checkbox toggles here: https://developer.apple.com/design/human-interface-guideline... It boils down to, "Use the switch toggle style only in a list row".
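For reference, both styles are the same control in SwiftUI with a different style modifier; a minimal macOS sketch (the setting names are made up):

```swift
import SwiftUI

struct SettingsPane: View {
    @State private var syncEnabled = true
    @State private var analyticsEnabled = false

    var body: some View {
        Form {
            // Classic Mac checkbox: clicking the label text also toggles it.
            Toggle("Sync documents with iCloud", isOn: $syncEnabled)
                .toggleStyle(.checkbox)

            // Switch style, which the HIG suggests reserving for list rows.
            Toggle("Share analytics", isOn: $analyticsEnabled)
                .toggleStyle(.switch)
        }
        .padding()
    }
}
```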
The Apple Vision Pro was a far more extreme product and was kept pretty well under wraps. (tho a market failure).
Beta quality and expense without value are just some of the reasons for its failure.
They've been doing exactly this since the first M1 MacBooks came out in 2020.
That's sort of the funny thing here. Apple's situation is almost the perfect inverse of Intel's. Intel fell completely off the wagon[1], but they did so at exactly the moment where the arc of innovation hit a wall and could do the least damage. They're merely bad, but are still selling plenty of chips and their devices work... just fine!
Apple, on the other hand, launched a shocking, world-beating product line that destroys its competition in basically all measurable ways into a market that... just doesn't care that much anymore. All the stuff we want to spend transistors on moved into the cloud. Games live on GPUs and not unified SOCs. A handful of AI nerds does not much of a market make.
And iOS... I mean, as mentioned, what are you even going to do with all that? Even the comparatively-very-disappointing Pixel 10 (I haven't even upgraded my 9!) is still a totally great all-day phone with great features.
[1] As of right now, unless 18A rides in to save them, Intel's best process is almost five YEARS behind the industry leader's.
I really don’t like macOS but I’ve shifted to recommending Macs to all my friends and family given the battery, portability, and speed.
Also, again, most folks just don't care. And of the remainder:
> Compiles code twice as fast as my new Windows desktop
That's because MS's filesystem layer has been garbage since NT was launched decades ago and they've never managed to catch up. Also if you're not apples/applesing and are measuring native C/C++ builds: VS is an OK optimizer but lags clang badly in build speed. The actual CPU is faster, but not by nearly 2x.
>That's because MS's filesystem layer has been garbage since NT was launched decades ago [...]
I confess that this kind of excuse drives me batty. End users don't buy CPUs or filesystems. They buy entire systems. "Well, it's not really that much faster, it's just that part of the system is junk. The rest is comparable!" That may be, but the end result for the person you're talking to is that their Windows PC compiles code at half the speed of their Mac. It's not like they bought it and selected the glacial filesystem, or even had a choice in the matter.
That's right up there with "my Intel integrated graphics gets lower FPS than my Nvidia card." "But the CPU is faster!" Possibly true, but still totally irrelevant if the rest of the system can't keep up.
At least historically for hardware components of PCs, this was not irrelevant, but the state of things:
You basically bought some PC as a starting basis. Because of the high speed of improvements, everybody knew that you would soon replace parts as you deemed feasible. If some component was not suitable anymore, you swapped it (upgrade the PC). You bought a new PC if things got insanely outdated, and updating was not worth the money anymore. With this new PC, the cycle of updating components started back from the beginning.
If performance is so critical, people do find ways around this. Just to give an arbitrary example since you mention file systems:
Oracle implemented their own filesystem (ASM Cluster File System (ACFS)):
> https://en.wikipedia.org/w/index.php?title=Oracle_Cloud_File...
"ACFS provides direct I/O for Oracle database I/O workloads. ACFS implements indirect I/O however for general purpose files that typically perform small I/O for better response time."
The use case was "compiling code". My assumption was that anyone buying hardware for that application would understand stuff like filesystem performance, system tuning, also stuff like "how to use a ramdisk" or "how to install Linux".
Yes, if you want to flame about the whole system experience: Apple's is the best, period. But not because they're twice as fast, that's ridiculous.
My mom is happily using a Lenovo from 2013 and looking to upgrade because it doesn't support Windows 11 and Win10 is running out of support. A contemporary Mac would have been the 2012 Mac Mini which would have received its final OS update with 10.15 Catalina in 2019, and would have received its final security update in 2022. (Desktop, so no difference in peripherals, etc.)
Incidentally, I actually purchased both the Lenovo and a 2012 Mac Mini (refurb) so I have the pricing data - the Lenovo was $540 and the Mac Mini was $730 - and then both took aftermarket memory and SSD upgrades.
A $500 laptop is probably more repairable, and worst case you pay $500 to get a new one. Not to mention battery replacement etc.
The expected total cost of ownership is very high for a Mac. It’s like owning a Mercedes. Maybe you can afford to buy one, but you cannot afford maintenance.
Between work and personal, I’ve had an Intel Air, 2x Intel Pros, M1 Air, 2x M3 Pros, and an M4 Pro. My wife has an M1 Air. My in-laws have an M3 iMac. My mom has… some version of an Apple Silicon laptop.
That is a decent amount of computers stretching over many years. The only maintenance required has been the aforementioned keyboard.
In pre-college education, the answer is often "use any other junky Chromebook from anywhere in the world", which is cheaper still.
I did drop my watch last week, and the second hand fell off, though.
If that had happened to any other laptop I would be able to either replace just the broken keycap, or just the keyboard.
And no, apple care+ that covers accidents is not cheap either at $150/year.
Also a lot of people prefer Windows. It's got a lot more applications than the Mac. It has way more historical enterprise support and management capability as well. If you had a Mac at a big company 20 years ago, the IT tooling was trash compared to Windows. It's probably still inferior to this day.
The Mac can (legally) run more software than any other computer. Aside from macOS software, there's a bunch of iOS and iPadOS software that you can run, and you can run a Windows, Linux, and Android software via VMs.
Let’s not forget that you’re now talking about buying a $100/year license; in just a few years you could buy a whole Windows computer with a permanent license for that money.
And if you’re going to talk about how great VMs are on Mac we can’t leave out how it’s the worst Docker/podman platform available.
And that's for macOS. For any other platform they actively prohibit any third-party operating systems.
1. If you don't know what to do with it, why did you buy it?
2. If you wanted a general purpose computer, why did you buy an iPad?
3. Which iPadOS limitations are particularly painful for you?
It was mentioned, as almost a side comment somewhere, that the M chip is in there for multitasking and higher-end image/video editing for "Pros". I could certainly use the M4 in an iPad Pro for iPadOS 26 and its multitasking. I run into occasional slowness when multitasking on my M2 iPad Air.
Why an iPad? Android tablets have been... not great for a long time. The pencil is very handy, and the ecosystem has the best apps. Also, I know a few rather handy tricks Safari can do, such as exporting entire webpages as PDF after a full-screen screenshot, that are very useful to my workflow.
2. I already own multiple general purpose computers. They're not as convenient as an iPad. My ridiculously powerful PC or even my decent laptop doesn't allow the same workflow. However, that's not an intentional software limitation, it's a consequence of their form factor, so I can't hold Microsoft to blame. On the other hand, Apple could easily make an iPad equivalent to a MacBook by getting out of the way.
3. The inability/difficulty of side-loading apps, the restriction to a locked down store. Refusing to make an interface that would allow for laptop-equivalent usage with an external/Bluetooth m+k. You can use an external monitor, but a 13" screen should already be perfectly good if window management and M+K usage wasn't subpar. Macs and iPads have near identical chips (the differences between an M chip for either are minor), and just being able to run MacOs apps on device would be very handy. Apple has allowed for developer opt-out emulation of iOS and iPadOS apps on Mac for a while now, why not the other way around?
If not obvious from the fact that I'm commenting on HN, I would gain utility from terminal access, the ability to compile and run apps on device, a better filesystem etc. Apple doesn't allow x86 emulators, nor can I just install Proton or Wine. If I can't side-load on a whim, it's not a general purpose computer. I can't use a browser that isn't just reskinned Safari, which rules out a great deal of obvious utility. There are a whole host of possible classes of apps, such as a torrent manager, which are allowed on other platforms but not on iPadOS. It's bullshit.
My pc and laptop simply aren't as convenient for the things I need an iPad for, and they can't be. On the other hand, my iPad could easily do many things I rely on a PC for, if Apple would get out of the way. iPadOS 26 is a step in the right direction, but there's dozens left to go.
All I can say is: stay tuned.
Browser engine lock-in - no Firefox+uBlock Origin = me no buy. And yes, there is Orion, which can run uBlock, but it and Safari have horrible UI/UX.
Literally everything you do gets the full power of the chips. They finish tasks faster using less power than previous chips. They can then use smaller batteries and thinner devices. A higher ceiling on performance is only one aspect of an upgraded CPU. A lower floor on energy consumed per task is typically much more important for mobile devices.
You probably won’t notice this when using the new machine.
For me, it only becomes noticeable when I go back to something slower.
It’s easy to take the new speed as a given.
> What if I don't notice the difference between video playback consuming 20% of the chip's available compute and it consuming 10%?
You would notice it in increased battery life. A CPU that finishes the task faster and more efficiently will get back into low power mode quicker.
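A back-of-the-envelope illustration of that race-to-idle effect, with invented power numbers just to show the shape of the math:

```swift
// Two chips playing back the same video (all numbers are made up).
// The more efficient chip spends a smaller fraction of each second at its
// active power level, so total energy drops even if peak power is similar.
struct Chip {
    let activeWatts: Double
    let idleWatts: Double
    let activeFraction: Double   // share of each second spent doing work
}

let older = Chip(activeWatts: 4.0, idleWatts: 0.3, activeFraction: 0.20)  // ~20% busy
let newer = Chip(activeWatts: 4.0, idleWatts: 0.3, activeFraction: 0.10)  // ~10% busy

func energyWattHours(_ c: Chip, hours: Double) -> Double {
    (c.activeWatts * c.activeFraction + c.idleWatts * (1 - c.activeFraction)) * hours
}

print(energyWattHours(older, hours: 1))   // ≈ 1.04 Wh
print(energyWattHours(newer, hours: 1))   // ≈ 0.67 Wh
```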
Apple already makes low cost versions of those, which are the previous models that they continue to manufacture.
Apparently there are references to it in macOS already.
I don't know if this already exists but it would be nice to see these added to benchmarks. Maybe it's possible to get Apple devices to do stable diffusion and related tech faster and just needs some incentives (winning benchmarks) for people to spend some effort. Otherwise though, my Apple Silicon is way slower than my consumer level NVidia Silicon
But newer chips might contain Neural Accelerator to close the gap a little bit (i.e. 10%??).
(I maintain https://apps.apple.com/us/app/draw-things-ai-generation/id64...)
This really reminds me of the 80/20 articles that made the frontpage yesterday. Just because a lot of HN users lament the fact that their 20% needs (can't run an LLM or compile large projects on an iPad) aren't met by an iPad doesn't mean that most people's needs can't be satisfied in a walled garden. The tablet form factor really is superior for a number of creative tasks where you can be both "hands on" with your work and "untethered". Nomad Sculpt in particular just feels like magic to me, with an Apple Pencil it's almost like being back in my high school pottery class without getting my hands dirty. And a lot of the time when you're doing creative work you're not necessarily doing a lot of tabbing back and forth, being able to float reference material over the top of your workspace is enough.
At this point Apple still recognizes that there is a large enough audience to keep selling MacBooks that are still general purpose computing devices to people who need them. Given their recent missteps in software, time will tell if they continue to recognize that need.
Downloading 15 different paid or free-with-in-app-purchases or free-with-ads apps to see which one actually does what it's supposed to do is one of those workarounds. I've learned how to do it and done it a bunch of times and I don't really like it. I much prefer the macOS/Windows/Linux workflow where there's typically some established, community run and trustworthy FOSS software to do whatever conversion you need.
Yeah, that took a long time for MS to get to not suck after Windows 8, but touch and tablet interactions on Windows 10 and Windows 11 work perfectly well.
WTH?? This is the first I am hearing this nonsense. Yet another reason why I won't get an iPad even though I am all in on Apple's ecosystem. It seems that Apple sees iPad users as the bottom feeders ripe for exploitation.
Assertions like this are what kill the iPad. Yes, DAWs "exist" but can only load the shitty AUs that Apple supports on the App Store. Professional plugins like Spectrasonics or U-He won't run on the iPad, only the Mac. CAD software "runs" but only supports the most basic parametric modeling. You're going to get your Macbook or Wintel machine to run your engineering workloads if that's your profession. Not because the iPad can't do these things, but because Apple recognizes that they can double their sales gimping good hardware. No such limitations exist on, say, the Surface lineup. It's wholly artificial.
I'm reminded of Damon Albarn's album The Fall - which he allegedly recorded on an iPad. It's far-and-away his least professional release, and there's no indication he ever returned to iOS for another album. Much like the iPad itself, The Fall is an enshrined gimmick fighting for recognition in a bibliography of genuinely important releases. Apple engineers aren't designing the next unibody Mac chassis on an iPad. They're not mixing, mastering and color-grading their advertisements on an iPad. God help them if they're shooting any footage with the dogshit 12MP camera they put on those things. iPads do nothing particularly well, which is acceptable for moseying around the web and playing Angry Birds but literally untenable in any industry with cutting-edge, creative or competitive software demands. Ask the pros.
It's easy to blame "Apple greedy" but optimizing either device to support an alternate input method degrades both. Apple is (supposed to be) all about a "polished" experience so this doesn't mesh with their design ethos. Any time I have seen a desktop environment get optimized for touch, people complain about it degrading the desktop experience. MacOS isn't even there yet and people are already complaining.
There are plenty of good AUs on the App Store (to name a few: DM10, Sonobus, the recent AudioKit modeled synths), but yes the selection of AUs on desktop is far greater. Most AU developers aren't going to pay the developer fee and go through the effort of developing, again, an entirely separate user interface, not to mention go through the app store approval process, to target a smaller market. It's a matter of familiarity. Just because your workflow depends on products that don't exist on iPad, doesn't mean that someone else's workflow isn't entirely productive without it. The entire industry is built on path dependence, so it's no wonder that software that has codebases that span decades and depend on backwards compatibility, i.e. the music production and CAD software, are not finding a lot of competition in the mobile space. Apple isn't designing their next unibody Mac chassis on the iPad, but that doesn't mean that a small business that makes 3D printed widgets isn't going to be happy using Onshape.
To be clear: I don't think an iPad is a _substitute_ for a desktop machine in most professional workflows. Partially due to path-dependence, and partially due to the greater information density that a desktop environment affords. But there are some workflows where the iPad feels like a much more _natural_ interface for the task at hand, and then that task can be transferred to the desktop machine for the parts where it isn't.
It blew the doors off every other laptop and desktop I've had (before or since).
When I think back to how quickly obsolete things became when I was younger (ie 90s and 00s), today's devices seem to last forever.
Look at glassy UIs. Worth it.
Did everyone forget that these chips started in general purpose MacBooks and were later put in the iPad?
If general purpose computing is the goal you can get a cheap Mac Mini
I've been hoping there were enough people like me that a third party would make a replacement but that never happened.
I know my current iPad Pro won't last forever so I suppose I'll end up with a Magic Keyboard setup eventually.
Edit with link: https://security.apple.com/blog/memory-integrity-enforcement...
Single-Core Score: 4133 vs 3748 (110.3%)
Multi-Core Score: 15437 vs 13324 (115.9%)
Same maximum clock speed, so assuming no special thermal solution on the new iPad Pro such as a vapour chamber, this is a ~10% pure IPC improvement, although the M5 has 6MB of L2 cache, 2MB more than the M4.
Not shown here is the E-core performance, which, if we trust the A19 Pro tests, is 20 to 30% higher than previous generations. The GPU is also a lot faster on the A19 Pro.
The M5 also comes with 12GB of memory as the baseline, 4GB more than the M4 you get on the iPad Pro. I hope the M5 MacBook Air continues to get 16GB as the baseline; this is looking like a very decent upgrade for anyone on an M1 or still on the Intel platform. It would be perfect if the MacBook Air got a vapour chamber like the iPhone Pro. I don't mind paying slightly more for it.
[1] https://browser.geekbench.com/v6/cpu/compare/14173685?baseli...
My friend with an M3 MacBook was complaining about the speed. I told them that was ridiculous and they must be doing something incredibly intensive. I came and took a look at it - I know chrome tabs are a memory hog but my God this thing slowed to a crawl with even lightweight usage despite being weeks old. I told him to return it immediately.
https://github.com/hmarr/vitals
Stats is another good one too:
I know it sounds a lot like vibes, because it kind of is, but it’s just what we’ve seen.
Wouldn’t it be easier to use Firefox or Safari? Chrome is a hog but it’s not like we don’t have multiple great alternatives which also use something like half of the battery and don’t oppose privacy measures.
Given the field they are in, I imagine they use chrome like many do for compatibility/testing reasons
The fastest way to get more memory is an ad blocker.
Edit: Let me double down, macOS 26 is the worst OS that Apple has shipped in the last two decades.
Bootloader is unlocked on Macs. That's how Asahi Linux started
Apple isn't the only company who can do this, but the reason they'll continue to have the lead for the foreseeable future is for all the reasons you dislike them. This is the benefit of near-complete vertical integration.
All the premier model providers are losing money hand over fist even at $100s a month in subscription fees.
M5 Ultra/M5 Extreme/M5 Super?
The Extreme versions were only rumors. But maybe Apple will finally make one with the M5 and release a proper Apple Silicon Mac Pro.
which is the same thing that people said about the m3
No joke, if I could run my Steam library on my phone, I'd probably buy a new phone every year (and might need to, given what the thermals and rapid charge/discharge cycles would do to battery longevity). But Apple's current strategy is to provide a tool, then let developers do the work themselves; compare to Valve's efforts (and occasionally stepping on rakes when games update themselves).
As far as I can tell, the main issue for putting CrossOver on iOS is a lack of API support and an inability for iOS software to start new processes. AltStore and emulators on iOS are exciting, and with iPadOS and MacOS becoming increasingly similar, I hope to see someone give WINE on iOS a shot.
I would love to see a world where I can play my Steam library on my iPad or my iPhone, considering the wild amount of performance they can output, but the limitations of iOS make it very difficult or likely impossible.
They don’t care that much about the 30% (which was the best deal ever for a phone app store when it came out) except how the App Store sells hardware.
If only the platform were open enough that developers had real access, Apple might get away with, like you say, not providing first-party support for gaming.
These are especially great on the various Android based gaming handhelds that are out now with Snapdragon 8 Gen 3 and similar SoC's in them, but it works on a phone too
You can already do this with tools like Winlator, of course, but Valve's performance patches would probably make the whole process a lot easier to get working.
Any such feature would come to Apple hardware last because of Apple's arbitrary software limitations (maybe it'll work in the EU?), of course, but once Proton goes full ARM, it's only a matter of time.
Not much else I can think of, either.
M1 is still insane. Apps, OS emulation... it just chugs along.
Outside of that though, it’s really hard to tell the difference. M1 is/was a beast and plenty fast enough for daily work.
The new project leadership team made the decision to prioritize getting their existing work upstreamed into the Linux kernel before working on supporting newer SoCs.
It's been going well.
> We are pleased to announce that our graphics driver userspace API (uAPI) has been merged into the Linux kernel. This major milestone allows us to finally enable OpenGL, OpenCL and Vulkan support for Apple Silicon in upstream Mesa. This is the only time a graphics driver’s uAPI has been merged into the kernel independent of the driver itself, which was kindly allowed by the kernel graphics subsystem (DRM) maintainers to facilitate upstream Mesa enablement while the required Rust abstractions make their way upstream. We are grateful for this one-off exception, made possible with close collaboration with the kernel community.
https://asahilinux.org/2025/05/progress-report-6-15/
Alyssa didn't abandon the project, she completed it.
Are you under the impression that Linux is about to abandon OpenGL and Vulkan in favor of a new graphics API that only Alyssa could possibly implement?
Marcan already provided the tools needed to capture the data being passed back and forth between macOS and the GPU, so you can see exactly what the newer versions of the SoC are doing differently.
> We are pleased to announce that our graphics driver userspace API (uAPI) has been merged into the Linux kernel. This major milestone allows us to finally enable OpenGL, OpenCL and Vulkan support for Apple Silicon in upstream Mesa.
Guidance was provided...
> Maintainers who want to be involved in the Rust side can be involved in it, and by being involved with it, they will have some say in what the Rust bindings look like. They basically become the maintainers of the Rust interfaces too.
> But maintainers who are taking the "I don't want to deal with Rust" option also then basically will obviously not have to bother with the Rust bindings - but as a result they also won't have any say on what goes on on the Rust side.
> So when you change the C interfaces, the Rust people will have to deal with the fallout, and will have to fix the Rust bindings. That's kind of the promise here: there's that "wall of protection" around C developers that don't want to deal with Rust issues in the promise that they don't have to deal with Rust.
> But that "wall of protection" basically goes both ways. If you don't want to deal with the Rust code, you get no say on the Rust code.
> Put another way: the "nobody is forced to deal with Rust" does not imply "everybody is allowed to veto any Rust code".
https://lore.kernel.org/lkml/CAHk-=wgLbz1Bm8QhmJ4dJGSmTuV5w_...
To get anything done on the Linux mailing lists you need an iron will.
> With Linux 6.16 now out in the wild, it’s time for yet another progress report! As we mentioned last time, the Asahi and Honeykrisp Mesa drivers have finally found their way upstream. This has resulted in a flurry of GPU-related work, so let’s start there.
Seems that they are doing pretty well upstreaming their work.
Apple already provides the translation layer to convert from DirectX 11 or 12 to Metal that Wine uses on Macs.
https://wccftech.com/apple-game-porting-toolkit-2-supports-a...
Proton does the exact same thing, only it translates DirectX into the Graphics API that Wine on Linux uses.
The new thing is that the M5-generation GPU cores picked up a ~40% performance boost, at least in the version that just shipped in the new iPhones.
To registered developers, with no upstream or downstream support whatsoever.
> Proton does the exact same thing
For everyone, with upstream and downstream vendoring.
It's quite exciting to have two competing standards like this; it really makes you wonder which one developers will side with.
It's open source, you don't have to be registered with anything.
Wine already uses it.
Buy the commercial version of Wine for Mac, and you get end user support.
> It's quite exciting to have two competing standards like this,
Wine is the single open standard here.
> Being a fork of Wine, Proton maintains very similar compatibility with Windows applications as its upstream counterpart... Proton generally lags behind its upstream Wine base by several releases.
https://www.wikipedia.org/wiki/Proton_(software)
Apple and Valve are just providing layers that translate graphics API calls from the Windows standard DirectX API to Metal on Mac or Vulkan on Linux that Wine can use to support games on those platforms.
However, Proton lags behind on features available in the newer versions of upstream Wine.
A huge number of them were Iranian.
But AnandTech had articles as far back as the A12, 7 years ago, where it was competing with the Intel chips of the era.
A secondhand link, because AnandTech has unfortunately been restructured: https://appleinsider.com/articles/18/10/05/apples-a12-bionic...
…because he left to work on the chips.
Apple is a vertical integration company and the CPU/GPU in their devices was a clear sore spot of relying on external vendors they did not like.
You can see their lead continuing and growing with things like their new modem used in some iPhones.
Single Core: ~12% (3679 vs 4133)
Multi Core: ~15% (13420 vs 15437)
Which is historically in line with the improvement expected from a new process node.
https://browser.geekbench.com/ios_devices/ipad-pro-13-inch-m...
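Re-deriving those percentages from the quoted scores (rounding accounts for the "~"), as a quick sketch:

    # Quick check of the quoted Geekbench 6 deltas, using the scores above.
    single = (3679, 4133)    # M4 vs M5 iPad Pro, single-core
    multi = (13420, 15437)   # M4 vs M5 iPad Pro, multi-core

    for label, (old, new) in (("single", single), ("multi", multi)):
        print(f"{label}: +{(new - old) / old:.1%}")
    # single: +12.3%
    # multi:  +15.0%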
AFAIK the M5 is still 3nm, produced on TSMC's N3P node.
From Wikipedia:

Node name | Gate pitch | Metal pitch | Year
5 nm      | 51 nm      | 30 nm       | 2020
3 nm      | 48 nm      | 24 nm       | 2022
2 nm      | 45 nm      | 20 nm       | 2025
1 nm      | 40 nm      | 16 nm       | 2027
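To see how little the marketing node name tracks those listed dimensions, here is a rough sketch that uses gate pitch times metal pitch as a crude area proxy. This is a simplification for illustration only, not a real transistor-density calculation:

    # Crude area proxy (gate pitch x metal pitch) from the table above.
    # Only meant to show how the node name drifts from the listed
    # dimensions; not a real transistor-density figure.
    nodes = {
        "5 nm": (51, 30),
        "3 nm": (48, 24),
        "2 nm": (45, 20),
        "1 nm": (40, 16),
    }

    base_area = nodes["5 nm"][0] * nodes["5 nm"][1]
    for name, (gate, metal) in nodes.items():
        area = gate * metal
        print(f"{name}: proxy {area} nm^2, ~{base_area / area:.2f}x vs 5 nm")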
Also Apple is not shy of odd numbered cores. iPad 2 had a tri-core GPU for example.
https://en.wikichip.org/wiki/chip_multiprocessor
Lots of "odd" counts by your two criteria (though not many odd counts).
But bear in mind that Geekbench runs very short benchmarks for single core especially, so that the CPU never starts thermal throttling.
The Apple chips are fast and efficient, and they "feel" even faster because of their single-core burst performance and very fast on-package RAM.
The chart below aggregates results from CPU-Monkey, Geekerwan's chip analyses, my own devices, and various other reports.
Apple M1 series, 3.2GHz, 5W: ~115
Apple M2 series, 3.5GHz, 5W: ~120
Apple M3 series, 4GHz, 7.5W: ~140
Apple M4 series, 4.4GHz, 7.5W: ~170

Snapdragon X1E, 4.3GHz, 10-20W: ~140
Snapdragon X2E, 5GHz, >20W: ~160

AMD 9950X, 5.7GHz (Desktop Best): ~140
AMD AI 9 HX 375, 5.1GHz (Laptop Best): ~125

Intel Ultra 9 285K, 5.7GHz (Desktop Best): ~145
Intel Ultra 9 285HX, 5.4GHz (Laptop Best): ~135
Intel Ultra 9 288V, 5.1GHz (Low Power Laptop Best): ~125
The Apple M5 may be the first CPU ever to approach ~200 points in Cinebench single-core while staying under 10W of core power draw. Competitors trail on all fronts by about two or even three generations in their respective device classes.
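A rough points-per-watt sketch from the chart above; I'm collapsing the quoted power ranges to single assumed values, so treat the ratios as ballpark only:

    # Rough single-core points per watt from the chart above. The power
    # ranges are collapsed to single assumed values, so these are ballpark.
    chips = {
        "Apple M3": (140, 7.5),
        "Apple M4": (170, 7.5),
        "Snapdragon X1E": (140, 15.0),  # assuming midpoint of 10-20W
        "Snapdragon X2E": (160, 20.0),  # assuming the low end of ">20W"
    }

    for name, (score, watts) in chips.items():
        print(f"{name}: ~{score / watts:.1f} pts/W")
    # Apple M4 lands around 23 pts/W vs ~8 pts/W for the X2E.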
For example, the M4 does get around 170 in single-core, but the Snapdragon X2E gets just under 2000 in multi-core, over double what the M4 scores. If your application is the kind of workload Cinebench reflects, the X2E is the better CPU for it. To match the X2E you need to go up to the 16-core M4 Max.
The 16-core variant of the M4 Max is only available in a 16-inch MBP that starts at €4,800; it's not clear how much the X2E laptops will cost, but I would bet a lot of money it's going to be much less than that...
As for desktop parts, the only Apple product that beats the typical x86 CPUs in multicore is actually the M3 Ultra, which is a pretty bad deal because it doesn't scale very well in other ways (GPU). Otherwise, Intel (i9s/Core Ultra 9) and AMD (Ryzen 9) still hold the crowns in pure multicore score.
The score of a 16-core M4 Max actually puts you down in Core Ultra 7 265K territory. You can put together a system based around that CPU and a 5070 Ti GPU (which benchmarks around the same raw numbers as the 40-core M4 Max's GPU but will actually perform much better for most things) for a full €1,200 less than a Mac Studio equipped like that (it even has the luxury of double the storage). If you don't need dedicated GPU power, or could do with a low-end GPU, the savings would be €1,700-2,000 (the 5070 Ti is an €800 part).
Let's be real: the Apple Silicon SoCs are very power efficient, but they are definitely not maximizing performance, and they are even less money-efficient. It is very suspect to argue about top performance while ignoring multicore.
Now here is another fact: the 16-core M4 Max can draw more power than the 140W power adapter included with the 16-inch MBP, which has a battery capacity of 100Wh. If you run the thing at full tilt, or near it, it will actually have a runtime of less than an hour. It's funny, because the Apple aficionados keep singing the praises of Apple Silicon, and many have been burned by that unexpected fact. It's easy to shit on the high-power gamer-type laptop that can't run well on battery, but the same is true if you use the full power of an Apple Silicon laptop. You might get like half an hour more runtime, but that's basically irrelevant.
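A back-of-the-envelope version of that runtime estimate, assuming the machine sustains a draw near the 140W adapter rating (actual draw varies with workload):

    # Back-of-the-envelope runtime: 100 Wh battery, sustained draw assumed
    # to sit near the 140 W adapter rating (real draw varies with workload).
    battery_wh = 100
    sustained_draw_w = 140

    minutes = battery_wh / sustained_draw_w * 60
    print(f"~{minutes:.0f} minutes at full tilt")  # ~43 minutes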
The reality is that the people singing Apple Silicon's efficiency praises don't have truly demanding workloads; otherwise the battery life wouldn't be a meaningful difference.
High-performance laptops don't make a lot of sense, whether they are Apple's or another brand's. They are mostly convenience products (for docking/undocking) or status symbols.
And you put out a long, long, long post to point out what everyone understands: putting in more cores and running them at a lower frequency yields better efficiency at full load… That's why we got to the point where, in today's x86 laptops, a single core running at full speed already exceeds the sustained multicore power target (28-60W depending on device class), because Intel and AMD have no way to raise performance other than adding more cores.
But I disagree that only privacy conscious enthusiasts will want to run locally in the end. Right now in the hype froth and bubble and while SOTA advances fast enough that getting the very latest hosted model is super compelling, nobody cares about anything. Longer term, especially once hosted services start deciding to try and make real money (read: ads, tracking, data mining, etc) this is going to change a lot. If you can get close to the same performance locally without data security issues, ads or expensive subscriptions, I think it will be very popular. And Apple is almost uniquely positioned to exploit this once the pendulum swings back that way.
For me, a cool and silent device is more important than raw power, and we are at a point where the CPU is not that important (memory matters more).
Is there even a way to automatically upgrade to a major macOS version? Automatic update settings are only for minor/patch and security responses.
At least Apple is still somewhat deferential to the user experience.
I'm not sure if they fixed their newer GPUs but they really ought to.
Essentially, they can't run most PC games or graphics software made in the past decade.
Before you do that, really research the exact model you plan on buying and how much support it truly has in Linux. Qualcomm has not exactly lived up to its own hyped intentions of good Linux support, and that provides a foundation of sand for hardware vendors to further drop the ball on when it comes to Linux support.
Some models built on the Elite platform reportedly have great support, while others don't. My experience with Linux on ARM SoCs outside of servers has made me very wary about using consumer products with them for desktop Linux. You really don't want to be dependent on the vendor, or on some random people working in their free time, for custom kernel forks or Linux images, and that seems to be the trend in the non-server ARM board + Linux ecosystem.
Is this the reason why the Android ecosystem is so abysmal when it comes to hardware driver support?
There are some ARM consumer computers that implement UEFI, etc, though.
The issue is that it's cheaper to just put together bespoke boards and not implement industry standards that are expected on PCs. Vendors don't have to worry about supporting anything other than the hardware and images they ship, so they don't see a reason to make third party OS support viable.
> Is this the reason why the Android ecosystem is so abysmal when it comes to hardware driver support?
Theoretically, Android should have all of the same drivers as Linux, along with Android-specific drivers that are abstracted over a stable driver interface instead of the unstable Linux interface.
What usually happens is the plethora of drivers that vanilla Linux ships with aren't built for, and distributed with, phones/tablets, and the Android drivers only have to support that specific board's hardware. Then, over time, the kernel doesn't get updated so new hardware won't be supported the longer you use the device.
If you're asking why Android devices have poor support for Linux, the OP's answer is the exact reason why.
Apple has their “Afterburner” card for ProRes media decoding; you could add even more ports, or there are probably weird AV interface cards, but the vast majority of people can save a few thousand dollars and get a Mac Studio instead.
Since the Mac Studio has more than 10 customers it gets updated more frequently.
There were rumors about the Mac Pro getting a higher tier of “we stuck twice as many cores together in the SoC” but it didn’t pan out, likely not worth the development time compared to the higher volume products. But it could hypothetically still happen.
Therefore I expect that the Mac Pro (and, in a similar vein, the Mac Studio) will be repositioned as an AI/ML dev machine, with Apple leaning into their lucky strike of UMA fitting modern requirements.
My bet is an M5 Extreme exclusive to the Mac Pro with 1 TB, possibly even 2 TB, of RAM, and the Mac Studio limited to the M5 Ultra and 1 TB of RAM at the high end.
But that's not based on rumors or "news" of any sort, just logic extrapolated from what I'd do if I were in Apple's shoes.
https://www.notebookcheck.net/Lenovo-IdeaCentre-Mini-x-debut...
The Elite X2 is near release and supposedly on par with these M5 numbers:
https://www.notebookcheck.net/Apple-M5-9-Cores-Processor-Ben...
It also has hardware AV1 encoding support, so depending on your workload, it can be significantly faster.
A server? Sure, I guess.
Apple seriously needs to open up development on/for iPadOS, or it'll become something you buy every decade or so. Who needs a new model if people can't even utilize what they have right now?