What is the AI PC platform? The experience on Windows 11, even for the basic UI of the Start menu, leaves a lot to be desired. Is Copilot adoption on Windows actually that popular, and does it take advantage of this AI PC platform?
Ryzen AI 400 mobile CPU chips are also releasing soon (though ROCm is still blah, I think)
Nvidia is still playing in the AI space despite all the noise others make about their own AI offerings. And despite Intel's hype, Nvidia's margins have been incredible recently (i.e., people are still buying them), so their platform hasn't yet been killed by Intel's "most widely adopted" AI platform offering.
>Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.
The true competitor is Ryzen AI; Nvidia doesn't produce these integrated CPU/GPU/AI products in the PC segment at all.
What actually makes it an AI platform? Some tight integration of an Intel Arc GPU, similar to the Apple M series processors?
They claim 2-5x performance for some AI workloads. But aren't they still limited by memory? The same limitation as always in consumer hardware?
I don't think it matters much whether you're limited by an Nvidia GPU with ~16 GB max or some new Intel processor with similar memory.
Nice to have more options though. Kinda wish the Intel Arc GPU would be developed into an alternative for self-hosted LLMs. 70B models can be quite good but are still difficult and slow to use self-hosted.
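For a rough sense of why that ~16 GB ceiling matters, here's a back-of-the-envelope sketch (my numbers, not from the thread) of the weight footprint alone for a 70B model at common quantization levels:

    # Weight memory for a 70B-parameter model (weights only;
    # KV cache and activations add more on top).
    params = 70e9
    for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        gb = params * bytes_per_param / 1e9
        print(f"{name}: ~{gb:.0f} GB")
    # fp16: ~140 GB, int8: ~70 GB, int4: ~35 GB -- even aggressively
    # quantized, a 70B model blows well past a 16 GB card.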
The latest Ryzen mobile CPU line didn't improve performance compared to its predecessor (the integrated GPU is actually worse), and I think the NPU is to blame.
(Also, the NPUs usually aren't any more separate from the GPU than tensor cores are separate from an Nvidia GPU, they are integrated with the CPU and iGPU.)
The general problem with NPUs for memory-limited tasks is either that the throughput available to them is too low to begin with, or that they're usually constrained to formats that will require wasteful padding/dequantizing when read (at least for newer models) whereas a GPU just does that in local registers.
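A toy illustration of that padding/dequantizing cost, as a sketch assuming a 4-bit block-quantized weight format like newer models commonly ship in: if an accelerator can't consume packed int4 directly, the weights must first be expanded to a supported type in memory, multiplying the bytes crossing the already-limited bus:

    import numpy as np

    # 1 MB of packed weights: two 4-bit values per byte.
    packed = np.random.randint(0, 256, size=1_000_000, dtype=np.uint8)
    scale = np.float16(0.01)  # per-block scale, simplified to one value

    lo = (packed & 0x0F).astype(np.int8) - 8   # unpack low nibbles
    hi = (packed >> 4).astype(np.int8) - 8     # unpack high nibbles
    dequant = (np.stack([lo, hi], axis=1).ravel() * scale).astype(np.float16)

    print(packed.nbytes / 1e6)   # 1.0 MB on the bus if read natively
    print(dequant.nbytes / 1e6)  # 4.0 MB if pre-expanded to fp16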
Jokes aside: they really do seem to get used for some things, like live captions and translations. Pretty sure you could also do these things on the iGPU or CPU at a higher power draw.
https://blogs.windows.com/windows-insider/2024/12/18/releasi...
In fairness, NPUs can use fewer hardware resources than a general-purpose discrete GPU, which makes them better for laptop workloads. But we all know that if a discrete GPU is available, there is no technical reason not to use it, assuming enough local memory is available.
Ah, and NPUs are yet another thing that GNU/Linux folks have to reverse engineer, as on Windows/Android/Apple OSes they are exposed via OS APIs, and there is still no industry standard for them.
https://github.com/intel/linux-npu-driver
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
https://www.khronos.org/events/building-the-foundation-for-a...
https://www.pcworld.com/article/2905178/ai-on-the-notebook-t...
Maybe these people aren't that creative....
I was at CES 2024 and saw the Snapdragon X Elite chip running a local LLM (Llama, I believe). How did it turn out? Users couldn't use that laptop for much besides running an LLM. They had no plans for a translation layer like Apple's Rosetta. Intel would be different in that regard for sure, but I just don't think it will fly against Ryzen AI chips or Apple silicon.
I agree with losing faith in Intel chips though.
I think maybe what OP meant was that the memory occupied by the model means you can't do anything alongside inferencing, e.g. have a compile job or whatever running (unless you unload the model once you're done asking it questions).
To be honest, we could really do with RAM abundance. Imagine if 128 GB of RAM became like 8 GB is today - now that would normalize local LLM inferencing (or at least make a decent attempt at it).
Of course you'd need the bandwidth too...
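Bandwidth really is the binding constraint: for token-by-token decoding, every weight gets read once per token, so a crude upper bound is bandwidth divided by model size. A sketch with illustrative (assumed, not measured) numbers:

    model_gb = 35  # 70B model at ~4-bit quantization
    systems = [
        ("dual-channel DDR5", 100),   # GB/s, approximate
        ("Apple M-series Max", 400),
        ("high-end discrete GPU", 1000),
    ]
    for name, bw in systems:
        print(f"{name}: <= ~{bw / model_gb:.0f} tokens/s")
    # ~3, ~11, and ~29 tokens/s respectively -- RAM capacity alone
    # doesn't save you without the bandwidth to stream it.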
The new Intel node seems to be somewhat weaker than TSMC's, going by the frequency numbers of the CPUs, but what matters most in a laptop is real battery life anyway.
Lunar Lake is also very slow in ST and MT compared to Apple.
Qualcomm's X Elite 2 SoCs have a much better chance of duplicating the Macbook experience.
LNL should only power throttle when you go to power-saver modes. Battery life will suffer when you let it boost high on all cores, but you're not getting great battery life doing heavy all-core loads either way. Overall MT should be better on Panther Lake with the unified architecture; AFAIK LNL's main problem was being too expensive, so higher-end, high-core-count SKUs were served by mobile Arrow Lake. And we're also getting what seems to be a very good iGPU, while AMD's iGPUs outside of Strix Halo are barely worth talking about.
ST is about the same as AMD. Apple being ahead is nothing out of the ordinary since their ARM switch: there's the node advantage, what I mentioned about the OS, and just better architecture, as they plainly have the best people working on it at the moment.
Meanwhile, Qualcomm's X Elite 1 did not throttle.
Lunar Lake uses TSMC N3 for compute tile. There is no node advantage. Yet, M4 is 42% faster in ST and M5 is 50% faster based on Geekbench 6 ST.
[0] https://www.pcworld.com/article/2463714/tested-intels-lunar-...
This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test with, so I'm just going off what people I know tested. It's also set to balanced, so I assume the best-performance power plan would push it to use its cores normally - on Windows laptops I've owned this could be done with a hotkey.
> Lunar Lake uses TSMC N3 for compute tile. There is no node advantage.
LNL is N3B, Apple is on N3E, which is a slight improvement in efficiency
> Yet, M4 is 42% faster in ST and M5 is 50% faster based on Geekbench 6 ST.
Like I said, they simply have a better architecture at the moment, one that's also more focused on the kind of client workloads GB benchmarks, because their use cases are narrower. If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.
And I'm not sure why being behind the market leader would make one lose faith in Intel; their most recent client fuckup was Raptor Lake instability, and I'd say that was handled decently. For now there's nothing to indicate Windows-on-ARM getting to Apple-level battery life without all of the vertical integration.
ETA: looking into it, the throttling behaviour seems to be very much OEM-dependent, though the tradeoffs will always remain the same
They skipped 5nm and 3nm, and that is indeed an accomplishment.
I hope the yields are high.
They have Intel 7, Intel 4, and Intel 3 nodes. Anyway, Intel's names don't correspond to the same numbers at TSMC; they're usually 1 to 1.5 generations behind the TSMC node of the same name.
So Intel 3 would be something like TSMC N6.
Logic density (may be inaccurate; also it's not the only metric for performance): Rapidus 2nm ≈ TSMC N2 > TSMC N3B > TSMC N3E/P > Intel 18A ≈ Samsung 3GAP
But 18A/20A already have PowerVia, while TSMC will implement backside power delivery in A16 (the next generation after N2)
As for comparison between the two: According to TechInsights, Intel's 18A could offer higher performance, whereas TSMC's N2 may provide higher transistor density - [1]
[0] - https://www.tomshardware.com/pc-components/cpus/intel-announ...
[1] - https://www.tomshardware.com/tech-industry/intels-18a-and-ts...
Now, unified memory shared freely between CPU and GPU would be cool, like Apple and AMD Strix Halo have, if that's what you meant.
With Strix Halo there are two ways of going about it: either set how much memory you want allocated to the GPU in the BIOS (less desirable), or set the GPU's memory allocation to 512MB in the BIOS and let the driver do it all dynamically, much like on a Mac.
[0]: https://github.com/ggml-org/llama.cpp/discussions/2182#discu...
Qualcomm's laptop chips thus far have also not had on-package RAM. They have announced that the top model from their upcoming Snapdragon X2 family will have a 192-bit wide memory bus, but the rest will still have a 128-bit memory bus.
Intel Lunar Lake did have on-package RAM, running at 8533 MT/s. This new Panther Lake family from Intel will run at 9600 MT/s for some of the configurations, with off-package RAM. All still with a 128-bit memory bus.
edit: fix typo
https://download.intel.com/newsroom/2026/CES2026/Intel-CES20...
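Those bus widths and transfer rates pin down peak bandwidth directly. A quick sketch of the arithmetic (the 192-bit X2 figure assumes the same 9600 MT/s, which hasn't been confirmed):

    def peak_gb_s(bus_bits: int, mt_s: int) -> float:
        return bus_bits / 8 * mt_s / 1000  # bytes/transfer * MT/s -> GB/s

    print(peak_gb_s(128, 8533))  # Lunar Lake:   ~136.5 GB/s
    print(peak_gb_s(128, 9600))  # Panther Lake: ~153.6 GB/s
    print(peak_gb_s(192, 9600))  # X2 top SKU:   ~230.4 GB/s (assumed MT/s)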
The CPUs are probably also fine!
Intel is so far ahead with consumer multi-chip. AMD has done amazing work with the IOD+CCD (I/O die / core-complex die) chiplet split (basically putting a northbridge on package), but is still trying to figure out how, in 2027's Medusa Point, to make a decent mainline multi-chip APU; they can't keep pushing monolithic APU dies like they have (excellent though those have been, FWIW). Intel has already been breaking the work up with its sweet EMIB, and hopefully is reaping the reward here. Stashing some tiny, very-low-power cores on the "northbridge" die is a genius move that saves incredible power for light use: a big+little+tiny design that lets the whole CCD shut down while work happens. Some very nice high-core-count configs. Panther Lake could be super exciting.
18A with backside power delivery / "PowerVia" could really be a great leap for Intel! Nice big solid power delivery wins, that could potentially really help. My fingers are so very crossed. Really hope the excitement for this future arriving pans out, at least somewhat!
Their end-of-year Nova Lake with b(ig)LLC and an even bigger, newer NPU6 (any new features beyond TOPS?) is also exciting. I hope that also includes the incredible Thunderbolt/USB4 connectivity Intel has typically included on mobile chips, but I'm not holding my breath. Every single mobile part is capable of 4x Thunderbolt 5. That is sick. I really hope AMD realizes the ball is in its court on interconnects at some point!! 20-lane PCIe configs are also very nice to have for mobile.
Lunar Lake was quite good for what it was: an amazingly well-integrated chip with great characteristics. As a 2+4 big/little part it wasn't enough for developers, but it was a great consumer chip. I think Intel's really going to have a great total system design with Panther Lake. Yes!
https://www.tomshardware.com/pc-components/cpus/intel-double...
Yes, you do need to spend more energy sending data between chiplets. Intel has been relentlessly optimizing that and is probably furthest ahead of the game there, with EMIB and Foveros. AMD just got to a baseline sea-of-wires, where they aren't using power-hungry PHYs to send data, and that is only shipping on Strix Halo at the moment; it's slated to be a big change for Zen 6. But Intel's been doing all that and more, IMO. https://chipsandcheese.com/p/amds-strix-halo-under-the-hood https://www.techpowerup.com/341445/amd-d2d-interconnect-in-z...
That also puts some bandwidth constraints on your system.
There's the labor cost of doing package assembly! Very non-trivial, very scary, very intimidating work. Knowing that TSMC's Arizona chips have to be shipped back to Taiwan to be assembled/packaged there, then potentially shipped wherever, is anecdata but very real. This just makes me respect Intel all the more for having such interesting chips, such as Lakefield ~6 years ago, and for their ongoing pursuit of this challenge.
So yeah, there are many optimal aspects to a single die. You're making a problem really hard by trying to split it up across chips.
It's not even clear why we want multi-chip. As a consumer, if you had your choice, yes, you're right: you'd want one big huge slab of a chip. There aren't many structural advantages for us in getting anything other than that.
And yet. Your cost savings can potentially be fantastically huge. Yields increase as your die area shrinks, at some geometric or similar rate. Being able to push more advanced nodes that don't yet have the best yields, without it being an epic fail, allows for ongoing innovation and risk acceptance.
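The usual way to see the yield effect is a Poisson defect model: the chance a die has zero killer defects falls exponentially with its area. A sketch with a made-up defect density:

    from math import exp

    d0 = 0.2  # defects per cm^2 -- illustrative, not a real fab number
    for area in (4.0, 1.0):  # one big die vs. a chiplet a quarter the size
        print(f"{area} cm^2: ~{exp(-d0 * area):.0%} yield")
    # 4 cm^2 monolithic: ~45% yield; 1 cm^2 chiplet: ~82%. A defect now
    # scraps a quarter of the silicon instead of the whole die.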
There are the modularity dividends. You can also tune appropriately: just as AMD keeps re-using the IOD across generations, Intel can innovate one piece at a time. This again is extremely liberating from a development perspective: you don't have to get everything totally right, and you can suffer faults not in the wafer but at the design level. Maybe the new GPU isn't going to ship in 6 months after all, so you keep using the old one, but you can still get the rest of the upgrades out.
There are maybe some power wins. I don't really know how much difference it makes, but Intel shutting down its CCD and using the tiny cores on the IOD (to use AMD's terms) is relishably good. It's easy to imagine a big NPU or a big GPU doing likewise. I'm expecting similar from AMD with Medusa Point, their 2027 big APU (but still below Medusa Halo, which I cannot frelling wait to see).
I think Intel's been super smart, with incredible vision about where chipmaking is headed, and has been well ahead of the curve. Alas, their P-core has been around in one form or another for a long time and is a bit of a hog, and shipping new nodes has been a disaster. But I think they're set up well, and, as frustrating and difficult as leaving the convenience of a big monolithic APU is, it feels like that time is here, and Intel's top of class at multi-chip in a way few others are. We're seeing AMD have to do the same (Medusa Point).
Optimal is a suboptimal statement. Only the Sith deal in absolutes, Anakin.
Healthy Intel/GF/TSMC competition at the head of the pack is great for the tech industry, and the global economy at large.
Perhaps even more importantly, with armed conflict looming over Taiwan and TSMC... well, enough said.
1) Battery life claims are specific and very impressive, possibly best in class.
2) Performance claims are vague and uninspiring.
Either this is an awful press release or this generation isn't taking back the performance crown.
A laser focus on five things is either business nonsense or optics nonsense. Who was this written for?
Intel called it a “one-off mistake”; it’s the best mistake Intel ever made.
On-package memory is claimed to give a 40% reduction in power consumption. To beat actual LL by 30%, the PL chip must actually be ~58% more efficient in an apples-to-apples memory configuration.
Possible if they doped PL’s silicon with magic pixie dust.
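For what it's worth, here's the arithmetic behind that ~58% figure, taking both quoted percentages at face value and applying them to total power (a big assumption, per the reply below):

    ll = 1.0              # Lunar Lake power, with on-package memory
    pl = 0.7 * ll         # Panther Lake claim: 30% better than actual LL
    pl_on_pkg = pl * 0.6  # give PL the same 40% on-package saving
    print(1 - pl_on_pkg)  # ~0.58 -> silicon ~58% more efficient, apples to apples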
40% reduction in what power consumption? I don't think memory is usually responsible for even 40% of the total SoC + memory power, and bringing memory on-package doesn't make it consume negative power.
https://www.phoronix.com/review/intel-whiskeylake-meteorlake...
But I won't be investing time and money in Intel again while the same anti-engineering beancounter board is still there. For example, they never owned the recent serious Raptor Lake hardware issues, and they never showed customers how this will never happen again.
https://en.wikipedia.org/wiki/Raptor_Lake#Instability_and_de... "Intel has decided not to halt sales or recall any units"
The only reason INTC isn't in a death spiral is because the US Govt. won't let that happen
P-core max frequency is 5.1 GHz on the highest-end part, and 4.4 GHz on the lowest.
There's no hyperthreading: https://www.pcgamer.com/hardware/processors/now-youve-got-so...
Dunno about AVX and APX. They're not making it easy to find, so... probably not.
https://www.intel.com/content/www/us/en/products/sku/245716/...
Update: Looks like the Trump admin converted billions in unpaid CHIPS Act grants into an equity stake in Intel last year https://techhq.com/news/intel-turnaround-strategy-panther-la...
DrammBA•1d ago
What in the world is this disaster of an opening paragraph? From the weird "AI PC platform" (not sure what that is) to the "will be the most broadly adopted and globally available AI PC platform" (is that a promise? a prediction? a threat?).
And you just gotta love the processor names "Intel Core Ultra Series 3 Mobile X9/X7"
dangus•1d ago
It’s an AI PC platform. It can do AI. It has an NPU and integrated GPU. That’s pretty straightforward. Competitors include Apple silicon and AMD Ryzen AI.
They’re predicting it’ll sell well, and they have a huge distribution network with a large number of partner products launching. Basically they’re saying every laptop and similar device manufacturer out there is going to stuff these chips in their systems. I think they just have some well-placed confidence in the laptop segment, because it’s supposed to combine the strong efficiency of the 200 series with the kind of strong performance that can keep up with or exceed competition from AMD’s current laptop product lineup.
Their naming sucks but nobody’s really a saint on that.
webdevver•1d ago
silicon taken up that could've been used for a few more compute units on the GPU, which is often faster at inference anyway and way more useful/flexible/programmable/documented.
dangus•1d ago
When you use an Apple device, it’s performing ML tasks while barely using any battery life. That’s the whole point of the NPU. It’s not there to outperform the GPU.
dangus•1d ago
The thing is, when you get an Apple product and you take a picture, those devices are performing ML tasks while sipping battery life.
Microsoft maybe shouldn't be chasing Apple, especially since they don't actually have any market share in tablets or phones, but I see what they're getting at: they're probably tired of their OS living on devices that get half the battery life of their main competition.
And here’s the thing, Qualcomm’s solution blows Intel out of the water. The only reason not to use it is because Microsoft can’t provide the level of architecture transition that Apple does. Apple can get 100% of their users to switch architecture in about 7 years whenever they want.
stockresearcher•1d ago
The Core Ultra lineup is supposed to be low-power, low-heat, right? If you want more compute power, pick something from a different product series.
wtallis•1d ago
I think that "dark silicon" mentality is mostly lingering trauma from when the industry first hit a wall with the end of Dennard scaling. These days, it's quite clear that you can have a chip that's more or less fully utilized, certainly with no "dark" blocks that are as large as a NPU. You just need to have the ability to run the chip at lower clock speeds to stay within power and thermal constraints—something that was not well-developed in 2005's processors. For the kind of parallel compute that GPUs and NPUs tackle, adding more cores but running them at lower clock speeds and lower voltages usually does result in better efficiency in practice.
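The underlying arithmetic: dynamic power scales roughly with C·V²·f, and the voltage needed rises with frequency, so spreading the same throughput across more, slower cores wins. A toy model with made-up constants:

    def dynamic_power(freq_ghz: float, cores: int) -> float:
        v = 0.7 + 0.15 * freq_ghz  # assumed voltage/frequency curve
        return cores * v**2 * freq_ghz  # ~ C * V^2 * f, arbitrary units

    # Same total throughput: cores * GHz = 8 in both cases.
    print(dynamic_power(4.0, cores=2))  # ~13.5
    print(dynamic_power(1.0, cores=8))  # ~5.8 -- wider but slower wins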
The real answer to the GPU vs NPU question isn't that the GPU couldn't grow, but that the NPU has a drastically different architecture making very different power vs performance tradeoffs that theoretically give it a niche of use cases where the NPU is a better choice than the GPU for some inference tasks.
chrismorgan•1d ago
Oh, the number of times I’ve heard someone assume their five- or ten-year-old machine must be powerful because it’s an i7… no, the i3-14100 (released two years ago) is uniformly significantly superior to the i7-9700 (released five years before that), and only falls behind the i9-9900 in multithreaded performance.
Within the same product family and generation, I expect 9 is better than 7, but honestly it wouldn’t surprise me to find counterexamples.
gambiting•1d ago
Ah the good old Dell laptop engineering, where the i9 is better on paper, but in reality it throttles within 5 seconds of starting any significant load and the cpu nerfs itself below even i5 performance. Classic Dell move.
tracker1•1d ago
Today is almost worse, as the thermal limits will be set entirely different between laptop vendors on the same chips, so you can't even have apples to apples performance expectations from different vendors.
stefanfisk•22h ago
Personally I think that Apple should not even be selling the 14” Max when it has this defect.
MBCook•1d ago
But at least you always know an A7 is better than an A6 or an A4. The M4 is better than the M3 and M1.
The suffixes make it more complicated, but at least within a suffix group the rule still holds.
gambiting•19h ago
(To add insult to injury, that 3080 Ti was literally pointless: the second you started playing any game, the entire system would throttle so hard you got extreme stuttering. It was like driving a Lamborghini with a 5-second fuel reserve. And given that I worked at a games studio, that was kind of an essential feature.)
avadodin•1d ago
At best, 14700KF-Intel+AMD might yield relevant results.
octoberfranklin•1d ago
> Are ZBooks good or do I want an OmniBook or ProBook? Within ZBook, is Ultra or Fury better? Do I want a G1a or a G1i? Oh you sell ZBook Firefly G11, I liked that TV show, is that one good?
https://geohot.github.io/blog/jekyll/update/2025/11/29/bikes...
lostlogin•1d ago
What about the iBook? That wasn’t tidy. Ebooks or laptops?
Or the iPhone 9? That didn’t exist.
Or MacOS? Versioning got a bit weird after 10.9, due to the X thing.
They do mess around with model numbers and have just done it again with the change to year numbers. I don’t particularly care but they aren’t all clean and pure.
https://daringfireball.net/linked/2025/05/28/gurman-version-...
stefanfisk•1d ago
And what was unclear about iBook vs PowerBook?
lostlogin•1d ago
Sorry, I thought you were saying that they don't use model numbers at all.
I think you were actually saying that they don't just use them for laptops.
kergonath•1d ago
Back then, there were iBooks (entry-level) and PowerBooks (professional, high performance and expensive). There had been PowerBooks since way back in 1991, well before any ebook reader. I am not sure what your gripe is.
> Or the iPhone 9? That didn’t exist.
There’s a hole in the series. In what way is it a problem, and how on earth is it similar to the situation described in the parent?
> Or MacOS? Versioning got a bit weird after 10.9, due the X thing.
It never got weird. After 10.9.5 came 10.10.0. Version numbers are not decimals.
Seriously, do you have a point apart from "Apple bad"?
lostlogin•1d ago
> It never got weird. After 10.9.5 came 10.10.0. Version numbers are not decimals.
They turned one of the numbers into a letter then started numbering again.
There was Mac OS 9, then Mac OS X. That got incremented up past 10.
You say they don’t mess around with model numbers. Yes they do, with software and hardware.
I like using them both.
kergonath•1d ago
They did not. It has been Mac OS X 10.0 through macOS 10.15. It never was X.1 or anything like that.
MBCook•23h ago
The version number the OS reported always said 10.whatever. Exactly as you said.
kergonath•1d ago
If you really want to complain, you can go back to the first unibody MacBook, which did not fit that pattern, or the interim period when high-DPI displays were being rolled out progressively, but let's be serious. The fact is that even at the worst of times their range could be described in 2 sentences. Now try to do that for any other computer brand. To my knowledge, the only other maker with an understandable lineup was Microsoft, before they lost interest.
lostlogin•1d ago
It’s a good time to buy one. They are all good.
It would be interesting to know how many SKUs are hidden behind the simple purchase interface on their site. With the various storage and colour options, it must be over 30.
yencabulator•1d ago
And depending on what you're trying to use it for, you need to map it to a string like "MacBookAir10,1" or "A2337" or "Macbook Air Late 2022".
Oh also the Macbook Air (2020) is a different processor architecture than Macbook Air (2020).
kergonath•1d ago
If you need to be technical, System Information says Mac13,1 and these identifiers have been extremely consistent for about 30 years.
Your product number encodes much more information than that, and about the only time when it is actually required is to see whether it is eligible for a recall.
> Oh also the Macbook Air (2020) is a different processor architecture than Macbook Air (2020).
Right, except that one is the MacBook Air (Retina, 2020), MacBookAir9,1, and the other is the MacBook Air (M1, 2020), MacBookAir10,1. It happens occasionally, but the fact that you had to go back 5 years, to a period in which the lineup underwent a double transition, speaks volumes.
lostlogin•1d ago
Looks like it was Notebook in 1982 and Dynabook after that.
https://en.wikipedia.org/wiki/Notebook_computer
cherioo•1d ago
It’s not really meant for consumers. Who would even visit newsroom.intel.com?
lostlogin•1d ago
What is an AI PC? ('Look, Ma! No Cloud!')
An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. https://newsroom.intel.com/artificial-intelligence/what-is-a...
fassssst•1d ago
A lot of that is in the first party Mac and Windows apps.
MBCook•23h ago
We probably could have done it years earlier. But when it showed up… wow.
ajross•1d ago
It's... the launch vehicle for a new process. Literally the opposite of "cost cutting", they went through the trouble of tooling up a whole fab over multiple years to do this.
Will 18A beat TSMC and save the company? We don't know. But they put down a huge bet that it would, and this is the hand that got dealt. It's important, not something to be dismissed.
hnuser123456•1d ago
If they have actually mostly caught up to TSMC, props, but also, I wish they hadn't given up on EUV for so long. Instead they decided to ship chips overclocked so high they burn out in months.
hnuser123456•1d ago
https://www.tomshardware.com/pc-components/cpus/lunar-lakes-...
ajross•20h ago
Trying to play this news off as "only cost cutting" is, to be blunt, insane. That's not what's happening at all.
hnuser123456•2h ago
Every single performance figure in TFA is compared to their own older generations, not to competitors.
ac29•23h ago
On-package memory is slightly more power efficient but it isn't any faster; it still uses industry-standard LPDDR. And Panther Lake supports faster LPDDR than Lunar Lake, so it's definitely not a regression.
etempleton•23h ago