
Sugar industry influenced researchers and blamed fat for CVD

https://www.ucsf.edu/news/2016/09/404081/sugar-papers-reveal-industry-role-shifting-national-hear...
198•aldarion•2h ago•107 comments

LaTeX Coffee Stains [pdf] (2021)

https://ctan.math.illinois.edu/graphics/pgf/contrib/coffeestains/coffeestains-en.pdf
122•zahrevsky•2h ago•29 comments

Shipmap.org

https://www.shipmap.org/
85•surprisetalk•1h ago•18 comments

LLM Problems Observed in Humans

https://embd.cc/llm-problems-observed-in-humans
78•js216•1h ago•34 comments

A4 Paper Stories

https://susam.net/a4-paper-stories.html
151•blenderob•4h ago•72 comments

Target has their own forensic lab to investigate shoplifters

https://thehorizonsun.com/features/2024/04/11/the-target-forensics-lab/
28•jeromechoo•1h ago•15 comments

Meditation as Wakeful Relaxation: Unclenching Smooth Muscle

https://psychotechnology.substack.com/p/meditation-as-wakeful-relaxation
44•surprisetalk•1h ago•17 comments

Many Hells of WebDAV: Writing a Client/Server in Go

https://candid.dev/blog/many-hells-of-webdav
22•candiddevmike•1h ago•12 comments

“Stop Designing Languages. Write Libraries Instead” (2016)

https://lbstanza.org/purpose_of_programming_languages.html
168•teleforce•4h ago•139 comments

US Job Openings Decline to Lowest Level in More Than a Year

https://www.bloomberg.com/news/articles/2026-01-07/us-job-openings-decline-to-lowest-level-in-mor...
146•toomuchtodo•1h ago•100 comments

The Case for Nushell (2023)

https://www.sophiajt.com/case-for-nushell/
7•ravenical•41m ago•1 comment

Health care data breach affects over 600k patients, Illinois agency says

https://www.nprillinois.org/illinois/2026-01-06/health-care-data-breach-affects-600-000-patients-...
6•toomuchtodo•28m ago•0 comments

BillG the Manager

https://hardcoresoftware.learningbyshipping.com/p/019-billg-the-manager
5•rbanffy•37m ago•0 comments

Sergey Brin's Unretirement

https://www.inc.com/jessica-stillman/google-co-founder-sergey-brins-unretirement-is-a-lesson-for-...
305•iancmceachern•6d ago•374 comments

Show HN: KeelTest – AI-driven VS Code unit test generator with bug discovery

https://keelcode.dev/keeltest
19•bulba4aur•3h ago•5 comments

Optery (YC W22) Hiring a CISO and Web Scraping Engineers (Node) (US and Latam)

https://www.optery.com/careers/
1•beyondd•4h ago

Show HN: I built a "Do not disturb" Device for my home office

https://apoorv.page/blogs/over-engineered-dnd
8•quacky_batak•4d ago•3 comments

Formal methods only solve half my problems

https://brooker.co.za/blog/2022/06/02/formal.html
58•signa11•4d ago•22 comments

Quake Brutalist Jam III

https://www.slipseer.com/index.php?resources/quake-brutalist-jam-iii.549/
82•Venn1•2d ago•13 comments

Dell's CES 2026 chat was the most pleasingly un-AI briefing I've had in 5 years

https://www.pcgamer.com/hardware/dells-ces-2026-chat-was-the-most-pleasingly-un-ai-briefing-ive-h...
28•mossTechnician•1h ago•6 comments

Vector graphics on GPU

https://gasiulis.name/vector-graphics-on-gpu/
120•gsf_emergency_6•4d ago•27 comments

Creators of Tailwind laid off 75% of their engineering team

https://github.com/tailwindlabs/tailwindcss.com/pull/2388
108•kevlened•53m ago•56 comments

Stop Doom Scrolling, Start Doom Coding: Build via the terminal from your phone

https://github.com/rberg27/doom-coding
528•rbergamini27•21h ago•367 comments

Opus 4.5 is not the normal AI agent experience that I have had thus far

https://burkeholland.github.io/posts/opus-4-5-change-everything/
734•tbassetto•23h ago•1053 comments

The Eric and Wendy Schmidt Observatory System

https://www.schmidtsciences.org/schmidt-observatory-system/
54•pppone•4h ago•40 comments

Electronic nose for indoor mold detection and identification

https://advanced.onlinelibrary.wiley.com/doi/10.1002/adsr.202500124
171•PaulHoule•16h ago•93 comments

A 30B Qwen model walks into a Raspberry Pi and runs in real time

https://byteshape.com/blogs/Qwen3-30B-A3B-Instruct-2507/
308•dataminer•20h ago•108 comments

Show HN: Comet MCP – Give Claude Code a browser that can click

https://github.com/hanzili/comet-mcp
16•hanzili•3d ago•18 comments

Commodore 64 floppy drive has the power to be a computer and runs BASIC

https://www.tomshardware.com/pc-components/cpus/commodore-64-floppy-drive-has-the-power-to-be-a-c...
18•rbanffy•1h ago•9 comments

Show HN: SMTP Tunnel – A SOCKS5 proxy disguised as email traffic to bypass DPI

https://github.com/x011/smtp-tunnel-proxy
112•lobito25•16h ago•38 comments

Intel Core Ultra Series 3 Debut as First Built on Intel 18A

https://newsroom.intel.com/client-computing/ces-2026-intel-core-ultra-series-3-debut-first-built-on-intel-18a
111•osnium123•1d ago

Comments

DrammBA•1d ago
> Today at CES, Intel unveiled Intel Core Ultra Series 3 processors, the first AI PC platform built on Intel 18A process technology that was designed and manufactured in the United States. Powering over 200 designs from leading, global partners, Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.

What in the world is this disaster of an opening paragraph? From the weird "AI PC platform" (not sure what that is) to the "will be the most broadly adopted and globally available AI PC platform" (is that a promise? a prediction? a threat?).

And you just gotta love the processor names "Intel Core Ultra Series 3 Mobile X9/X7"

dangus•1d ago
Intel marketing isn’t the best but I am struggling to understand what issue you’re taking with this.

It’s an AI PC platform. It can do AI. It has an NPU and integrated GPU. That’s pretty straightforward. Competitors include Apple silicon and AMD Ryzen AI.

They’re predicting it’ll sell well, and they have a huge distribution network with a large number of partner products launching. Basically they’re saying every laptop and similar device manufacturer out there is going to stuff these chips in their systems. I think they just have some well-placed confidence in the laptop segment, because it’s supposed to combine the strong efficiency of the 200 series with the kind of strong performance that can keep up with or exceed competition from AMD’s current laptop product lineup.

Their naming sucks but nobody’s really a saint on that.

webdevver•1d ago
I can't believe we're still putting NPUs into new designs.

That silicon could've been used for a few more compute units on the GPU, which is often faster at inference anyway and far more useful/flexible/programmable/documented.

cromka•1d ago
Guess they're following Apple here, whose NPUs get all the support possible, as far as I can tell.
dangus•1d ago
Bingo. Maybe Microsoft shouldn’t even be chasing them but I think they have a point to try and stay competitive. They can’t just have their OS getting half the battery life of their main competitor.

When you use an Apple device, it’s performing ML tasks while barely using any battery life. That’s the whole point of the NPU. It’s not there to outperform the GPU.

zmb_•1d ago
You can thank Microsoft for that. Intel architects in fact did not want to waste area on an NPU. That caused Microsoft to launch their AI-whatever-branded PCs with Qualcomm, who were happy to throw in whatever Microsoft wanted in order to be the launch partner. After that, Intel had to follow suit to make Microsoft happy.
dangus•1d ago
That doesn’t explain why Apple “wastes” die area on their NPU.

The thing is, when you get an Apple product and you take a picture, those devices are performing ML tasks while sipping battery life.

Microsoft maybe shouldn't be chasing Apple, especially since they don't actually have any market share in tablets or phones, but I see what they're getting at: they are probably tired of their OS living on devices that get half the battery life of their main competition.

And here’s the thing, Qualcomm’s solution blows Intel out of the water. The only reason not to use it is because Microsoft can’t provide the level of architecture transition that Apple does. Apple can get 100% of their users to switch architecture in about 7 years whenever they want.

stockresearcher•1d ago
Every modern chip needs some percentage dedicated to dark silicon. There is no cheating the thermal reality. You could add more compute units in the GPU, but you then have to make up for it somewhere else. It’s a balancing act.

The Core Ultra lineup is supposed to be low-power, low-heat, right? If you want more compute power, pick something from a different product series.

wtallis•1d ago
> Every modern chip needs some percentage dedicated to dark silicon. There is no cheating the thermal reality. You could add more compute units in the GPU, but you then have to make up for it somewhere else. It’s a balancing act.

I think that "dark silicon" mentality is mostly lingering trauma from when the industry first hit a wall with the end of Dennard scaling. These days, it's quite clear that you can have a chip that's more or less fully utilized, certainly with no "dark" blocks that are as large as a NPU. You just need to have the ability to run the chip at lower clock speeds to stay within power and thermal constraints—something that was not well-developed in 2005's processors. For the kind of parallel compute that GPUs and NPUs tackle, adding more cores but running them at lower clock speeds and lower voltages usually does result in better efficiency in practice.

The real answer to the GPU vs NPU question isn't that the GPU couldn't grow, but that the NPU has a drastically different architecture making very different power vs performance tradeoffs that theoretically give it a niche of use cases where the NPU is a better choice than the GPU for some inference tasks.
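To put rough numbers on that efficiency argument: a minimal sketch assuming the classic dynamic-power model P ≈ cores × V² × f, with all figures hypothetical.

    # Dynamic power scales roughly as cores * V^2 * f; voltage can drop
    # along with frequency, so a wider, slower configuration often wins
    # on perf/W. All numbers below are illustrative, not measured.

    def power(cores, volts, freq_ghz):
        return cores * volts**2 * freq_ghz      # arbitrary units

    def throughput(cores, freq_ghz):
        return cores * freq_ghz                 # assumes perfectly parallel work

    narrow = dict(cores=8,  volts=1.00, freq_ghz=2.0)   # few cores, high clocks
    wide   = dict(cores=16, volts=0.75, freq_ghz=1.2)   # more cores, low clocks

    for name, cfg in (("narrow", narrow), ("wide", wide)):
        t = throughput(cfg["cores"], cfg["freq_ghz"])
        p = power(**cfg)
        print(f"{name}: throughput={t:.1f}  power={p:.1f}  perf/W={t/p:.2f}")

    # narrow: throughput=16.0  power=16.0  perf/W=1.00
    # wide:   throughput=19.2  power=10.8  perf/W=1.78

Under these made-up numbers the wide configuration delivers 20% more throughput at about a third less power, which is the whole case for spending area instead of clocks.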

astrange•22h ago
NPUs aren't designed to be "faster", they are designed to have better perf/power ratios.
jmward01•1d ago
I think I have given up on chip naming. I honestly can't tell anymore; there are so many modifiers on the names these days. I assume 9 is better than 7, right? Right?
chrismorgan•1d ago
> I assume 9 is better than 7 right? Right?

Oh, the number of times I’ve heard someone assume their five- or ten-year-old machine must be powerful because it’s an i7… no, the i3-14100 (released two years ago) is uniformly significantly superior to the i7-9700 (released five years before that), and only falls behind the i9-9900 in multithreaded performance.

Within the same product family and generation, I expect 9 is better than 7, but honestly it wouldn’t surprise me to find counterexamples.
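As an aside, the generation is at least recoverable from the model number itself. A toy parser, purely heuristic (suffixes, mobile parts, and the newer Core Ultra scheme all break it):

    import re

    # Heuristic: in names like i7-9700 or i3-14100, the digits after the
    # dash are <generation><3-digit SKU>. Illustrative only; many real
    # parts (K/F/U suffixes, Core Ultra, Xeon) don't follow this pattern.
    def intel_generation(model):
        m = re.search(r"i[3579]-(\d{4,5})", model)
        return int(m.group(1)[:-3]) if m else None

    print(intel_generation("i7-9700"))    # 9
    print(intel_generation("i3-14100"))   # 14, five generations newer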

gambiting•1d ago
>>Within the same product family and generation, I expect 9 is better than 7

Ah, the good old Dell laptop engineering, where the i9 is better on paper, but in reality it throttles within 5 seconds of starting any significant load and the CPU nerfs itself to below even i5 performance. Classic Dell move.

chrismorgan•1d ago
Within the same family and generation, I don't think this should happen any more. But especially in the past, some laptops were configurable with processors of different generations or families (M, Q, QM, U, so many possibilities), so that the i7 option might have worse real-world performance than the i5 (due to having more, slower cores).
tracker1•1d ago
It's been a cooling problem on a lot of i9 laptops... the CPU will hit thermal peaks, then throttle down, which feels incredibly janky as a user... then it spins back up, and down... the performance curve is just wacky in general.

Today it's almost worse, as the thermal limits are set entirely differently between laptop vendors on the same chips, so you can't even have apples-to-apples performance expectations across vendors.

stefanfisk•1d ago
Apple had the same problem before they launched the M1. Unless your workloads are extremely bursty the i9 MacBook is almost guaranteed to be slower than the base i7.
ZiiS•1d ago
Even their ultra-efficient silicon didn't fully solve this; a 16" M4 Pro often outperforms a 14" M4 Max stuck throttling.
flyinglizard•1d ago
Are they throttling with the fan off? Because I don't recall ever hearing the fan on my M3 Max 14" (granted, no heavy deliberate computation beyond regular dev work).
stefanfisk•22h ago
AFAIK it’s only something that happens under sustained heavy load. The 14” Max should still outperform the Pro for shorter tasks but I’d reckon few people buy the most expensive machine for such use cases.

Personally I think that Apple should not even be selling the 14” Max when it has this defect.

ZiiS•21h ago
No this shows up when you really fully load them and the fans can't keep up. Most people never do, but then why buy the Max?
MBCook•1d ago
I can’t comment on that.

But at least you always know an A7 is better than an A6 or an A4. The M4 is better than the M3 and M1.

The suffixes make it more complicated, but at least within a suffix group the rule still holds.

ZiiS•2h ago
But if you buy a Mac Studio today, you have to choose between an M4 Max and a much faster M3 Ultra.
zuhsetaqi•23h ago
First time I’m hearing this. Do you have any sources on this?
christkv•1d ago
I still have the i9 MacBook Pro and it's a dog for sure; it throttles massively.
zozbot234•23h ago
The latest iPhone base model performs better than the iPhone Air despite the latter having a Pro chip, because that Pro is so badly throttled due to the device form factor.
tracker1•1d ago
Same for the later-generation Intel MacBook Pros... The i9 was so bad, and the throttling made it practically unusable for me. If it weren't a work-issued laptop, I'd have either returned it, or at least under-volted and under-clocked it so it didn't hiccup every time I did anything at all.
dehrmann•23h ago
I had an X1 Carbon like this, only it'd crash for no apparent reason. The internet consensus, which Lenovo wouldn't own up to, was that the i7 CPUs were overpowered for the cooling, so your best bet was either undervolting them or getting an i5.
mrandish•22h ago
Yeah, putting an i9 in any laptop that's not an XL gaming rig with big fans is very nearly always a waste of money (there might exist a few rare exceptions for some oddball workloads). Manufacturers selling i9s in thin & light laptops at an ultra price premium may fall just short of the legal definition of fraud but it's as unconscionable as snake-oil audiophile companies selling $500 USB cables.
wtallis•22h ago
That's still assigning too much significance to the "i9" naming. Sometimes, the only difference between the i9 part and the top i7 part was something like 200MHz of single-core boost frequency, with the core counts and cache sizes and maximum power limit all being equal. So quite often, the i7 has stood to gain just as much from a higher-power form factor as the i9.
gambiting•19h ago
Tbf 2 jobs ago I had a Dell enterprise workstation laptop, an absolute behemoth of a thing, it was like 3.5kg, it was the thicker variant of the two available with extra cooling, specifically sold to companies like ours needing that extra firepower, and it had a 20 core i9, 128GB of DDR5 CAMM ram, and a 3080Ti - I think the market price of that thing was around £14k, it was insane. And it had exactly that kind of behaviour I described - I would start compiling something in Visual Studio, I would briefly see all cores jump to 4GHz and then immediately throttle down to 1.2GHz, to a point where the entire laptop was unresponsive while the compilation was ongoing. It was a joke of a machine - I think that's more of a fraud than what you described, because companies like ours were literally buying hundreds of these from Dell and they were literally unsuitable for their advertised use.

(to add insult to the injury - that 3080Ti was literally pointless as the second you started playing any game the entire system would throttle so hard you had extreme stuttering in any game, it was like driving a lamborghini with a 5 second fuel reserve. And given that I worked at a games studio that was kinda an essential feature).

avadodin•1d ago
A machine learning model can place a CPU on the versioning manifold but I'm not confident that it could translate it to human speech in a way that was significantly more useful than what we have now.

At best, 14700KF-Intel+AMD might yield relevant results.

octoberfranklin•1d ago
Laptop names are even worse:

> Are ZBooks good or do I want an OmniBook or ProBook? Within ZBook, is Ultra or Fury better? Do I want a G1a or a G1i? Oh you sell ZBook Firefly G11, I liked that TV show, is that one good?

https://geohot.github.io/blog/jekyll/update/2025/11/29/bikes...

lostlogin•1d ago
And at the root of all that shit lies Apple and the ‘book’ suffix.
kergonath•1d ago
Apple is very consistent. You have the MacBook Air (lighter, more portable variant) and the MacBook Pro (more expensive and powerful variant). They don’t mess around with model numbers.
lostlogin•1d ago
> Apple is very consistent. You have the MacBook Air (lighter, more portable variant) and the MacBook Pro (more expensive and powerful variant).

What about the iBook? That wasn’t tidy. Ebooks or laptops?

Or the iPhone 9? That didn’t exist.

Or macOS? Versioning got a bit weird after 10.9, due to the X thing.

They do mess around with model numbers and have just done it again with the change to year numbers. I don’t particularly care but they aren’t all clean and pure.

https://daringfireball.net/linked/2025/05/28/gurman-version-...

stefanfisk•1d ago
It was a response to you specifically calling out the book suffix.

And what was unclear about iBook vs. PowerBook?

lostlogin•1d ago
The iBook store.

Sorry, I thought you were saying that they don’t use model numbers at all.

I think you were actually saying that they don't use them just for laptops.

wtallis•1d ago
"iBook" referred to a laptop from 1999 to 2006. "iBooks" referred to the eBook reader app and store from 2010 to 2019. I'll grant that there is some possibility for confusion, but only if the context of the conversation spans multiple decades but doesn't make it clear whether you're talking about hardware or software.
kergonath•1d ago
> What about the iBook? That wasn’t tidy. Ebooks or laptops?

Back then, there were iBooks (entry-level) and PowerBooks (professional, high performance and expensive). There had been PowerBooks since way back in 1991, well before any ebook reader. I am not sure what your gripe is.

> Or the iPhone 9? That didn’t exist.

There’s a hole in the series. In what way is it a problem, and how on earth is it similar to the situation described in the parent?

> Or MacOS? Versioning got a bit weird after 10.9, due the X thing.

It never got weird. After 10.9.5 came 10.10.0. Version numbers are not decimals.

Seriously, do you have a point apart from "Apple bad"?

lostlogin•1d ago
You were saying that Apple is very consistent. I’m pointing out they aren’t particularly.

> It never got weird. After 10.9.5 came 10.10.0. Version numbers are not decimals.

They turned one of the numbers into a letter then started numbering again.

There was Mac OS 9, then Mac OS X. That got incremented up past 10.

You say they don’t mess around with model numbers. Yes they do, with software and hardware.

I like using them both.

kergonath•1d ago
> They turned one of the numbers into a letter then started numbering again.

They did not. It was Mac OS X 10.0 through macOS 10.15. It never was X.1 or anything like that.

MBCook•23h ago
Right. MacOS X was the marketing name. But it was pronounced 10, just a stylization with Roman numerals.

The version number the OS reported always said 10.whatever. Exactly as you said.

kergonath•20h ago
Yes, and you did sound silly when saying it out loud the official way (OS ten ten ten was a famous one, for Yosemite).
lostlogin•9h ago
I stand corrected. I thought the X(10) was part of the version number, not a prefix that got added.
bebna•1d ago
I got a MacBook. No, not an air or pro, just MacBook.
kergonath•1d ago
Back when there were MacBooks, it was MacBook (standard model), MacBook Air (lighter variant), and MacBook Pro (more expensive, high-performance variant). Sure, 3 is more complicated than 2, but come on.

If you really want to complain, you can go back to the first unibody MacBook, which did not fit that pattern, or the interim period when high-DPI displays were being rolled out progressively, but let's be serious. The fact is that even at the worst of times their range could be described in 2 sentences. Now, try to do that for any other computer brand. To my knowledge, the only other one with an understandable lineup was Microsoft, before they lost interest.

lostlogin•1d ago
> The fact is that even at the worst of times their range could be described in 2 sentences.

It’s a good time to buy one. They are all good.

It would be interesting to know how many SKUs are hidden behind the simple purchase interface on their site. With the various storage and colour options, it must be over 30.

kergonath•20h ago
Loads, I assume. But those are things like "MacBook Pro M1 Max with a 1TB SSD and a matte screen coating" versus "MacBook Pro M1 with a 256GB SSD and a standard screen". The granularity of say Dell’s product numbers is not enough for that either, and you still need a long product number when searching their knowledge base.
librasteve•1d ago
waiting for a MacBook Vapour
yencabulator•1d ago
Apple is so "consistent" that the way to know which kind of Air or Pro it is, is to find the tiny print on the bottom that's a jumble of letters like "MGNE3" and google it.

And depending on what you're trying to use it for, you need to map it to a string like "MacBookAir10,1" or "A2337" or "Macbook Air Late 2022".

Oh also the Macbook Air (2020) is a different processor architecture than Macbook Air (2020).

kergonath•1d ago
The canonical way if you need a version number is the "about this Mac" dialog (here it says Mac Studio 2022).

If you need to be technical, System Information says Mac13,1 and these identifiers have been extremely consistent for about 30 years.

Your product number encodes much more information than that, and about the only time when it is actually required is to see whether it is eligible for a recall.

> Oh also the Macbook Air (2020) is a different processor architecture than Macbook Air (2020).

Right, except that one is the MacBook Air (Retina, 2020), MacBookAir9,1, and the other is the MacBook Air (M1, 2020), MacBookAir10,1. It happens occasionally, but the fact that you had to go back 5 years, to a period in which the lineup underwent a double transition, speaks volumes.

edgineer•1d ago
Apple did not invent the -book suffix for model names of notebook computers.
lostlogin•1d ago
Thanks - I didn’t know that.

Looks like it was Notebook in 1982 and Dynabook after that.

https://en.wikipedia.org/wiki/Notebook_computer

jhickok•1d ago
TIL Geohot pretty much wants the exact same thing in a laptop: basically a MacBook Pro running Linux.
cherioo•1d ago
AI PC has been a buzzword for more than 2 years now (despite being a near-useless concept), and Intel has something like 75% market share in laptops. Both of those are well within the norm for an Intel marketing piece.

It's not really meant for consumers. Who would even visit newsroom.intel.com?

lostlogin•1d ago
Apparently it’s been a thing for a while:

What is an AI PC? ('Look, Ma! No Cloud!')

An AI PC has a CPU, a GPU and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. https://newsroom.intel.com/artificial-intelligence/what-is-a...
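In application terms, "using the NPU" mostly means routing a model through a runtime that can target it. A minimal sketch with ONNX Runtime; the execution-provider names are real, but NPU device support and fallback details vary by driver and release, so treat the specifics as assumptions:

    import onnxruntime as ort

    # Prefer the OpenVINO execution provider (which can target Intel NPUs),
    # falling back to the plain CPU provider if it isn't available.
    # "model.onnx" is a placeholder; "NPU" device_type depends on the release.
    session = ort.InferenceSession(
        "model.onnx",
        providers=[
            ("OpenVINOExecutionProvider", {"device_type": "NPU"}),
            "CPUExecutionProvider",
        ],
    )
    print(session.get_providers())   # shows which provider actually loaded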

sidewndr46•1d ago
It'd be interesting to see some market survey data showing the number of AI laptops sold & the number of users that actively use the acceleration capabilities for any task, even once.
sixothree•1d ago
I'm not sure I've ever heard of a single task that comes built into the system and uses the NPU.
fassssst•1d ago
Remove background from an image. Summarize some text. OCR to select text or click links in a screenshot. Relighting and centering you in your webcam. Semantic search for images and files.

A lot of that is in the first party Mac and Windows apps.

lostlogin•1d ago
Selecting text in a photo is a game changer. I love it.
MBCook•23h ago
Wasn’t built in OCR an amazing feature?

We probably could have done it years earlier. But when it showed up… wow.

olyjohn•17h ago
CES stands for Consumer Electronics Show last I checked.
CyberDildonics•1d ago
It's a disaster along with the title. There isn't a lot of clear information.
hnuser123456•1d ago
It means they did cost cutting on Lunar Lake and are excited to sell a lot of them at similar or higher prices.
ajross•1d ago
> cost cutting on Lunar Lake

It's... the launch vehicle for a new process. Literally the opposite of "cost cutting": they went through the trouble of tooling up a whole fab over multiple years to do this.

Will 18A beat TSMC and save the company? We don't know. But they put down a huge bet that it would, and this is the hand that got dealt. It's important, not something to be dismissed.

hnuser123456•1d ago
Lunar Lake integrated DRAM on the package, which was faster and more power efficient; this reverts that. They also moved part of the chip from being sourced from TSMC to being made in-house. And if their foundry is competitive, they should be shaking down other foundry customers the way TSMC is.

If they have actually mostly caught up to TSMC, props, but also, I wish they hadn't given up on EUV for so long. Instead they decided to ship chips overclocked so high they burn out in months.

ajross•1d ago
I don't see how any of that substantiates "Panther Lake and 18A are just cost cutting efforts vs. Lunar Lake". It mostly just sounds like another boring platform flame.
hnuser123456•1d ago
I'll let Intel speak for themselves here:

https://www.tomshardware.com/pc-components/cpus/lunar-lakes-...

ajross•20h ago
Again, you're talking about the design of Panther Lake, the CPU IC. No one cares, it's a CPU. The news here is the launch of the Intel 18A semiconductor process and the discussion as to if and how it narrows or closes the gap with TSMC.

Trying to play this news off as "only cost cutting" is, to be blunt, insane. That's not what's happening at all.

Tostino•20h ago
I'm not GP, but I think it really does matter whether Intel is able to sell this process to other companies. If they're only producing their own chips on it, that's quite a valid criticism.
ajross•18h ago
And for the fourth time, it may be a valid "criticism" in the sense of "Does Intel Suck or Rule?". It does not validate the idea that this product release, which introduces the most competitive process from this company in over a decade, is merely a "cost reduction" change.
hnuser123456•2h ago
It's only as exciting as a cost reduction because they're playing catch-up, trying not to need to outsource their highest-performance silicon. Let me know when Intel gets perf/watt high enough to be of interest to Apple, gamers, or anyone who isn't just buying a basic PC because their old one died, or an Intel server because that's what they've always had.

Every single performance figure in TFA is compared to their own older generations, not to competitors.

ac29•23h ago
> Lunar Lake integrated DRAM on the package, which was faster and more power efficient, this reverts that.

On-package memory is slightly more power efficient, but it isn't any faster; it still uses industry-standard LPDDR. And Panther Lake supports faster LPDDR than Lunar Lake, so it's definitely not a regression.

etempleton•23h ago
Cost cutting? 18A probably has more invested in it than every other process Intel has ever produced combined.
glzone1•1d ago
If they are going to be the most broadly adopted AI platform, where does that leave Nvidia?

What is the AI PC platform? The experience on Windows 11, for just the basic UI of the Start menu, leaves a lot to be desired. Is Copilot adoption on Windows that popular, and does it take advantage of this AI PC platform?

Ryzen AI 400 mobile CPU chips are also releasing soon (though ROCm is still blah, I think).

Nvidia is still playing in the AI space despite all the noise others make about their AI offerings - and despite the Intel hype, Nvidia's margins have been incredible recently (i.e., people are still using them), so their platform hasn't yet been killed by Intel's "most widely adopted" AI platform offering.

Traster•1d ago
Firstly,

>Series 3 will be the most broadly adopted and globally available AI PC platform Intel has ever delivered.

The true competitor is Ryzen AI, Nvidia doesn't produce these integrated CPU/GPU/AI products in the PC segment at all.

zamadatix•1d ago
How broad your PC AI hardware adoption is matters little when the overwhelming majority of users use cloud hosted AI.
zapnuk•1d ago
I assume it's still x86-64?

What actually makes it an AI platform? Some tight integration of an Intel Arc GPU, similar to the Apple M series processors?

They claim 2-5x performance for some AI workloads. But aren't they still limited by memory? The same limitation as always in consumer hardware?

I don't think it matters much whether you're limited by an Nvidia GPU with ~max 16GB or some new Intel processor with similar memory.

Nice to have more options though. Kinda wish the Intel Arc GPU would be developed into an alternative for self-hosted LLMs. 70B models can be quite good but are still difficult/slow to use self-hosted.
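The memory point is easy to quantify: the weights alone set a floor on RAM, and memory bandwidth caps decode speed, since every generated token streams the whole model. A rough sizing sketch; the bandwidth figure is assumed, and KV cache and overhead are ignored:

    # RAM needed just for the weights of a 70B-parameter model, plus the
    # decode ceiling implied by memory bandwidth (each token reads all
    # weights once). Ignores KV cache, activations, and runtime overhead.

    def weights_gb(params_billion, bits):
        return params_billion * bits / 8   # 1e9 params * (bits/8) bytes = GB

    BANDWIDTH_GBS = 120.0   # assumed: ballpark for a 128-bit LPDDR5X laptop

    for bits in (16, 8, 4):
        size = weights_gb(70, bits)
        print(f"70B @ {bits:2d}-bit: {size:4.0f} GB, <= {BANDWIDTH_GBS/size:.1f} tok/s")

    # 70B @ 16-bit:  140 GB, <= 0.9 tok/s
    # 70B @  8-bit:   70 GB, <= 1.7 tok/s
    # 70B @  4-bit:   35 GB, <= 3.4 tok/s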

vbezhenar•1d ago
These processors have an NPU (Neural Processing Unit), which is supposed to accelerate some small local neural networks. Nvidia RTX GPUs have much more powerful NPUs, so it's more about laptops without a discrete GPU.
distances•1d ago
And as far as I can see, it's a total waste of silicon. Anything running on it will be so underpowered that it doesn't matter anyway. It'd be better to dedicate the transistors to the GPU.

The latest Ryzen mobile CPU line didn't improve performance compared to its predecessor (the integrated GPU is actually worse), and I think the NPU is to blame.

wtallis•1d ago
If you ask NVIDIA, inference should always run on the GPU. If you ask anybody else designing chips for consumer devices, they say there's a benefit to having a low-power NPU that's separate from the GPU.
dragonwriter•1d ago
Okay, yeah, and those manufacturers’ opinions are both obvious reflections of market position independent of the merits; what do people who actually run inference say?

(Also, the NPUs usually aren't any more separate from the GPU than tensor cores are separate from an Nvidia GPU, they are integrated with the CPU and iGPU.)

Spellman•23h ago
Depends on how big the NPU is and how much power/memory the inference model needs.
zozbot234•23h ago
If you're running an LLM there's a benefit in shifting prompt pre-processing to the NPU. More generally, anything that's memory-throughput limited should stay on the GPU, while the NPU can aid compute-limited tasks to at least some extent.

The general problem with NPUs for memory-limited tasks is either that the throughput available to them is too low to begin with, or that they're usually constrained to formats that will require wasteful padding/dequantizing when read (at least for newer models) whereas a GPU just does that in local registers.
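The split comes down to arithmetic intensity, i.e. FLOPs per byte of weight traffic: prefill amortizes each weight read over the whole prompt, while decode re-reads every weight for a single token. A toy estimate under dense-transformer assumptions (~2 FLOPs per weight per token, 2-byte weights):

    # High FLOPs/byte suits compute-rich units (the NPU's niche);
    # intensity near 1 is bandwidth-bound and belongs on whichever
    # unit sits on the fastest memory path.

    def flops_per_byte(tokens, bytes_per_weight=2.0):
        return 2.0 * tokens / bytes_per_weight

    print(f"prefill, 1024-token prompt: {flops_per_byte(1024):.0f} FLOPs/byte")
    print(f"decode, 1 token at a time:  {flops_per_byte(1):.0f} FLOPs/byte")

    # prefill: 1024 FLOPs/byte (compute-limited)
    # decode:     1 FLOPs/byte (memory-bandwidth-limited)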

gambiting•1d ago
But like.....what for example. As a normal windows PC user, what kind of software can I run that will benefit from that NPU at all?
KeplerBoy•1d ago
We don't ask that question. In reality everything is done in the cloud. Maybe they package some camera app that applies Snapchat-like filters with NPUs, but that's about the extent of it.

Jokes aside: they really do seem to do some things like live captions and translations. Pretty sure you could also do these things on the iGPU or CPU, at a higher power draw.

https://blogs.windows.com/windows-insider/2024/12/18/releasi...

pjmlp•1d ago
It is another way Microsoft has tried to cater to OEMs as a means to bring PC sales back to the glory days of exponential growth, especially under the Copilot+ PC branding, nowadays still siloed into Windows on ARM.

In fairness, NPUs can use fewer hardware resources than a general-purpose discrete GPU, and are thus better for laptop workloads. However, we all know that if a discrete GPU is available, there is no technical reason not to use it, assuming enough local memory is available.

Ah, and NPUs are yet another thing that GNU/Linux folks would have to reverse engineer as well, as on Windows/Android/Apple OSes they are exposed via OS APIs, and there is yet no industry standard for them.

vbezhenar•1d ago
It's open source:

https://github.com/intel/linux-npu-driver

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

pjmlp•1d ago
That is not an industry standard that works across vendors in an OS- and GPU-agnostic way, which is why Khronos has started a new standardization effort.

https://www.khronos.org/events/building-the-foundation-for-a...

kakacik•1d ago
1) tick AI checkbox 2) ??? 3) profit
justin66•1d ago
They're going to find a way to accelerate the Windows start menu with it.
gambiting•1d ago
God I hope so
mminer237•1d ago
Oh boy, instead of building an efficient index or optimizing the start menu or its built-in web browser, they're adding more power usage to make the computer randomly guess what I want returned since they still can't figure out how to return search results of what you actually typed.
SirMaster•1d ago
Windows Recall?
undersuit•23h ago
Try searching for something like "My mouse pointer is too small"

https://x.com/rfleury/status/2007964012923994364

gambiting•20h ago
Incredible. 100% typical microsoft though. I'm a "veteran" windows/xbox developer and none of this surprises me.
wmf•23h ago
https://www.microsoft.com/en-us/windows/ai-features

https://www.pcworld.com/article/2905178/ai-on-the-notebook-t...

gambiting•20h ago
No, for sure, but AFAIK you get all of those features even if you don't have an NPU. And even if you do have one, it's unclear to me which of them actually use the NPU for extra power or whether they all just run on the CPU. The thing that's missing for me is "this is the thing you can only do on a Copilot+ PC and it's not available otherwise".
KeplerBoy•1d ago
Are we calling tensor cores NPUs now?
Marsymars•1d ago
How did we end up with Tensor Cores and a Tensor SoC from two different companies?
mrguyorama•17h ago
The same way we ended up with both Groq and Grok branded LLMs

Maybe these people aren't that creative....

sbinnee•1d ago
I will wait for the actual reviews from users. But I have lost faith in Intel chips.

I was at CES 2024 and saw the Snapdragon X Elite chip running a local LLM (Llama, I believe). How did it turn out? Users cannot use that laptop except for running an LLM. They had no plans for a translation layer like Apple's Rosetta. Intel would be different for sure in that regard, but I just don't think it will fly against Ryzen AI chips or Apple silicon.

ZuLuuuuuu•1d ago
Isn't it a bit of an exaggeration to say that users cannot use Snapdragon laptops except for running LLMs? Qualcomm and Microsoft already have a translation layer named Prism (not as good as Rosetta, but pretty good nevertheless): https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...

I agree with losing faith in Intel chips though.

webdevver•1d ago
>Isn't it a bit exaggerating to say that users cannot use Snapdragon laptops except for running LLMs?

I think maybe what OP meant was that the memory occupied by the model meant you couldn't do anything alongside inferencing, e.g. have a compile job or whatever running (unless you unload the model once you're done asking it questions).

To be honest, we could really do with RAM abundance. Imagine if 128GB of RAM became like 8GB of RAM is today - now that would normalize local LLM inferencing (or at least make a decent attempt).

Of course you'd need the bandwidth too...

blell•1d ago
Prism is not as good as Rosetta 2? At least Prism supports AVX.
Numerlor•1d ago
Lost faith from what? On x86 mobile, Lunar Lake chips are the clear best for battery life at the moment, and mobile Arrow Lake is competitive with AMD's offerings. The only thing they're missing is a Strix Halo equivalent, but AMD messed that one up and there are like 2 laptops with it.

The new Intel node seems to be somewhat weaker than TSMC's, going by the frequency numbers of the CPUs, but what'll matter most in a laptop is real battery life anyway.

aurareturn•1d ago
Lunar Lake throttles a lot. It can lose 50% of its performance on battery. It's not like Apple Silicon, where performance is exactly the same plugged in or not.

Lunar Lake is also very slow in ST and MT compared to Apple.

Qualcomm's X Elite 2 SoCs have a much better chance of duplicating the MacBook experience.

Numerlor•1d ago
Nobody is duplicating the MacBook experience, because Apple is integrating both the hardware and the OS, while everyone else is fighting Windows and OEMs who are horrible at firmware.

LNL should only power-throttle when you go to power-saver modes; battery life will suffer when you let it boost high on all cores, but you're not getting great battery life doing heavy all-core loads either way. Overall MT should be better on Panther Lake with the unified architecture, since AFAIK LNL's main problem was being too expensive, so higher-end, high-core-count SKUs were served by mobile Arrow Lake. And we're also getting what seems to be a very good iGPU, while AMD's iGPUs outside of Strix Halo are barely worth talking about.

ST is about the same as AMD. Apple being ahead is nothing out of the ordinary since their ARM switch: there's the node advantage, what I mentioned about the OS, and simply a better architecture, as they plainly have the best people working on it at the moment.

aurareturn•1d ago
LNL throttles heavily even on the default profile, not just power saver modes.[0]

Meanwhile, Qualcomm's X Elite 1 did not throttle.

Lunar Lake uses TSMC N3 for the compute tile. There is no node advantage. Yet the M4 is 42% faster in ST and the M5 is 50% faster, based on Geekbench 6 ST.

[0]https://www.pcworld.com/article/2463714/tested-intels-lunar-...

Numerlor•4h ago
> LNL throttles heavily even on the default profile, not just power saver modes.

This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test things on myself; I'm just going off what people I know have tested. It's also still on balanced, so the best-performance power plan would, I assume, push it to use its cores normally - on Windows laptops I've owned this could be done with a hotkey.

> Lunar Lake uses TSMC N3 for compute tile. There is no node advantage.

LNL is N3B; Apple is on N3E, which is a slight improvement for efficiency.

> Yet, M4 is 42% faster in ST and M5 is 50% faster based on Geekbench 6 ST.

Like I said, they simply have a better architecture at the moment, one that is also more focused on client workloads than GB benchmarks capture, because their use cases are narrower. If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.

And I'm not sure why being behind the market leader would make one lose faith in Intel; their most recent client fuckup was the Raptor Lake instability, and I'd say that was handled decently. For now there's nothing else to indicate Windows-on-ARM getting to Apple-level battery performance without all of the vertical integration.

ETA: looking at things, the throttling behaviour seems to be very much OEM-dependent, though the tradeoffs will always remain the same.

WithinReason•1d ago
Their comparisons claim better performance than both.
cubefox•1d ago
Any speculation on what the equivalent TSMC node is for Intel 18A?
tuananh•1d ago
TSMC N2, I think (2nm).
compounding_it•1d ago
If that is the case, then it's interesting how Intel managed to catch up so quickly.
chasil•1d ago
After their 7/10nm delay, they are bringing 2nm into production.

They skipped 5nm and 3nm, and that is indeed an accomplishment.

I hope the yields are high.

aurareturn•1d ago
They didn't skip it.

They have Intel 7, Intel 4, and Intel 3 nodes. Anyway, Intel's node names do not correspond to the same numbers at TSMC; they're usually 1 or 1.5 generations behind with the same name.

So Intel 3 would be something like TSMC N6.

Dylan16807•1d ago
Wow, I didn't realize Intel was slacking on top of their node rename. Sure their "10nm" was ahead at the time, but if they'd left the numbers alone they'd be a much closer match to everyone else today instead of even further off.
tromp•1d ago
Closer to N3 or N5, I would think. Intel's node numbering is far more aspirational than TSMC's.
mhh__•1d ago
If it's still being updated, the Wikipedia article about semiconductor fabrication has a table with some reasonably comparable numbers (when known) for Intel X and TSMC Y.
ytch•1d ago
https://x.com/Kurnalsalts/status/1962173515815424003

Logic density (may be inaccurate; it's also not the only metric for performance): Rapidus 2nm ≈ TSMC N2 > TSMC N3B > TSMC N3E/P > Intel 18A ≈ Samsung 3GAP

But 18A/20A already has PowerVia, while TSMC will implement backside power delivery in A16 (the next generation of N2).

aurareturn•1d ago
So 18A is roughly TSMC N4P. N4P is part of the N5 family.
etempleton•23h ago
18A supposedly has some advantages in power efficiency and some other areas compared to TSMC's approach. Ultimately, TSMC doesn't have a 2nm product yet, so it is a pretty big deal that Intel is competitive again with TSMC's latest. Samsung is incredibly far behind at this point.
aurareturn•7h ago
TSMC commenced N2 mass production last month.
signatoremo•1d ago
It’s the final phase of Intel’s 5N4Y plan, aimed at reaching parity with TSMC by the end of 2025, so it’s comparable to TSMC’s most advanced node, N2 [0].

As for a comparison between the two: according to TechInsights, Intel's 18A could offer higher performance, whereas TSMC's N2 may provide higher transistor density [1].

[0] - https://www.tomshardware.com/pc-components/cpus/intel-announ...

[1] - https://www.tomshardware.com/tech-industry/intels-18a-and-ts...

sandGorgon•1d ago
This doesn't have integrated RAM like Lunar Lake, right?
klardotsh•1d ago
Nearly all modern SoCs have built-in RAM now. Apple Silicon does it, AMD Strix Halo and beyond do it, Intel Lunar Lake does it, most ARM SoCs from vendors other than Apple do it…

Now, unified memory shared freely between CPU and GPU would be cool, like Apple and AMD SH have, if that’s what you meant.

notenlish•1d ago
Aren't Strix Halo and Apple's M series a bit different? IIRC you need to choose how much RAM will be allocated to the iGPU, whereas on the Mac it is all handled dynamically.
Tsiklon•1d ago
On the Mac it's all dynamically handled.

With Strix Halo there's two ways of going about it; either set how much memory you want allocated to GPU in BIOS (Less desirable), or set the memory allocation to the GPU to 512MB in the BIOS, and the driver will do it all dynamically much like on a Mac.

adgjlsfhk1•22h ago
Strix also does it dynamically, just with a limit (which is generally set to ~75% of your total RAM).
einsteinx2•1h ago
You can also change it manually on macOS [0]; though yes, by default it’s automatic but with a much lower limit (I think 2/3 of total RAM or something).

[0]: https://github.com/ggml-org/llama.cpp/discussions/2182#discu...

wtallis•1d ago
AMD Strix Halo does not have on-package RAM. What makes it stand out from other x86 SoCs is that it has more memory channels, for a total of a 256-bit wide bus compared to 128-bit wide for all other recent consumer x86 processors.

Qualcomm's laptop chips thus far have also not had on-package RAM. They have announced that the top model from their upcoming Snapdragon X2 family will have a 192-bit wide memory bus, but the rest will still have a 128-bit memory bus.

Intel Lunar Lake did have on-package RAM, running at 8533 MT/s. This new Panther Lake family from Intel will run at 9600 MT/s for some of the configurations, with off-package RAM. All still with a 128-bit memory bus.
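Those widths and transfer rates convert directly into peak bandwidth: (bus bits / 8) × MT/s. A quick sketch using the figures above (the Strix Halo rate is an assumption):

    # Peak theoretical DRAM bandwidth = (bus width in bits / 8) * MT/s.
    configs = {
        "Lunar Lake, 128-bit @ 8533 MT/s":   (128, 8533),
        "Panther Lake, 128-bit @ 9600 MT/s": (128, 9600),
        "Strix Halo, 256-bit @ 8000 MT/s":   (256, 8000),  # rate assumed
    }
    for name, (bits, mts) in configs.items():
        print(f"{name}: {bits / 8 * mts / 1000:.0f} GB/s")

    # Lunar Lake:   137 GB/s
    # Panther Lake: 154 GB/s
    # Strix Halo:   256 GB/s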

usagisushi•1d ago
No, it's more like a power-efficient Arrow Lake, with fewer P-cores and more LE-cores (e.g. P+E+LE: AL 6+8+2 vs. PL 4+8+4).

edit: fix typo

tanh•1d ago
The list of claims and what was tested in the comparison can be found here https://edc.intel.com/content/www/us/en/products/performance...
shihab•1d ago
I was surprised by the defiant tone on an official page. But it's missing actual numbers, which makes it all pretty strange.
adrian_b•1d ago
The numbers and the benchmarking conditions are in the PDF linked on that page:

https://download.intel.com/newsroom/2026/CES2026/Intel-CES20...

jauntywundrkind•1d ago
Xe3 GPU could be super, super great. Xe2 is already very strong, and this could really be an incredible breakout moment.

The CPUs are also probably fine!

Intel is so far ahead with consumer multi-chip. AMD has done amazing work with its IOD+CCD (I/O die + core complex die) chiplet split, basically having a northbridge on package, but is still trying to figure out how, in 2027's Medusa Point, to make a decent mainline multi-chip APU; they can't keep pushing monolithic APU dies like they have (though those have been excellent, FWIW). Intel, with its sweet EMIB, has been breaking the work up already, and hopefully is reaping the reward here. Stashing some tiny, very-low-power cores on the "northbridge" die is a genius move that saves incredible power for light use: a big+little+tiny design that lets the whole CCD shut down while work happens. Some very nice high-core-count configs. Panther Lake could be super exciting.

18A with backside power delivery ("PowerVia") could really be a great leap for Intel! Nice big, solid power-delivery wins that could potentially really help. My fingers are so very crossed. I really hope the excitement for this future arriving pans out, at least somewhat!

Their end-of-year Nova Lake, with b(ig)LLC and an even bigger, newer NPU6 (any new features beyond TOPS?), is also exciting. I hope it also includes the incredible Thunderbolt/USB4 connectivity Intel has typically put on mobile chips, but I'm not holding my breath. Every single mobile part is capable of 4x Thunderbolt 5. That is sick. I really hope AMD realizes the ball is in its court on interconnects at some point! 20-lane PCIe configs are also very nice to have for mobile.

Lunar Lake was quite good for what it was: a very well-integrated chip with great characteristics. As a 2+4 big/little design it wasn't enough for developers, but it was a great consumer chip. I think Intel's really going to have a great total system design with Panther Lake. Yes!

https://www.tomshardware.com/pc-components/cpus/intel-double...

wmf•23h ago
For a laptop chip the optimal design is a single die. Apple, Qualcomm, and AMD agree on this. Chiplets are a last resort for when you can't afford a single die due to yield or mask costs.
jauntywundrkind•20h ago
It feels like a true-until-it's-not problem.

Yes, you do need to spend more energy sending data between chiplets. Intel has been relentlessly optimizing that and is probably the furthest ahead of the game there, with EMIB and Foveros. AMD just got to a baseline sea-of-wires, where they aren't using power-hungry PHYs to send data, and that is only shipping on Strix Halo at the moment and is slated to be a big change for Zen 6. But Intel's been doing all that and more, IMO. https://chipsandcheese.com/p/amds-strix-halo-under-the-hood https://www.techpowerup.com/341445/amd-d2d-interconnect-in-z...

That also puts some bandwidth constraints on your system.

There's the labor cost of doing package assembly! Very non-trivial, very scary, very intimidating work. Knowing that TSMC's Arizona chips have to be shipped back to Taiwan, assembled/packaged there, then potentially shipped wherever is anec-data, but very real. This just makes me respect Intel all the more, for having such interesting chips, such as Lakefield ~6 years ago, and for their ongoing pursuit of this challenge.

So yeah, there are many optimal aspects to a single die. You're making a problem really hard by trying to split it up across chips.

It's not even clear why we want multi-chip. As a consumer, if you had your choice, yes, you are right: we do want a big huge slab of a chip. There aren't many structural advantages for us in getting anything other than what we want, on one big chip.

And yet. The cost savings can potentially be fantastically huge. Yields increase as your square-millimeterage shrinks, at some geometric or similar rate. Being able to push more advanced nodes that don't have the best yields, without it being an epic fail, allows for ongoing innovation and risk acceptance.

There are the modularity dividends. You can also tune appropriately: just as AMD keeps re-using the IOD across generations, Intel can innovate one piece at a time. This again is extremely liberating from a development perspective: not having to get everything totally right, being able to suffer faults not in the wafer but at the design level, where maybe the new GPU isn't going to ship in 6 months after all, so we'll keep using the old one, but we can still get the rest of the upgrades out.

There are maybe some power wins. I don't really know how much difference it makes, but Intel just shutting down their CCD and using the tiny cores on the IOD (to use AMD's terms) is relishably good. It's easy for me to imagine a big NPU or a big GPU doing likewise. I'm expecting similar from AMD with Medusa Point, their 2027 big APU (but still below Medusa Halo, which I cannot frelling wait to see).

I think Intel's been super smart, with incredible vision about where chipmaking is headed, and has been well ahead of the curve. Alas, their P-core has been around in one form or another for a long time and is a bit of a hog, and shipping new nodes has been a disaster. But I think they're set up well, and, as frustrating and difficult as leaving the convenience of a big-chip APU is, it feels like that time is here, and Intel is top of class at multi-chip in a way few others are. We are seeing AMD have to do the same (Medusa Point).

Optimal is a suboptimal word. Only the Sith deal in absolutes, Anakin.

JohnBooty•23h ago
Yes. It's one of those things where even if you will never buy an Intel product, everybody in the world should be rooting for Intel to produce a real winner here.

Healthy Intel/GF/TSMC competition at the head of the pack is great for the tech industry, and the global economy at large.

Perhaps even more importantly, with armed conflict looming over Taiwan and TSMC... well, enough said.

fancyfredbot•1d ago
Two things stand out to me:

1) Battery life claims are specific and very impressive, possibly best in class.

2) Performance claims are vague and uninspiring.

Either this is an awful press release or this generation isn't taking back the performance crown.

w-m•1d ago
“With Series 3, we are laser focused on improving power efficiency, adding more CPU performance, a bigger GPU in a class of its own, more AI compute and app compatibility you can count on with x86.” – Jim Johnson, Senior Vice President and General Manager, Client Computing Group, Intel

A laser focus on five things is either business nonsense or optics nonsense. Who was this written for?

HDThoreaun•1d ago
Well, this is the Consumer Electronics Show, so I would say consumers who are looking at buying laptops.
throwaway81523•1d ago
Can't we just focus on everything?
DannyBee•1d ago
I think you mean laser focus on everything. Maybe they have a prism.
simulator5g•19h ago
I’m sure they have something like a prism. Perhaps, a PRISM.
dudeinjapan•1d ago
Meanwhile they are NOT laser-focusing on doing more of Lunar Lake, with its on-package memory and glorious battery life.

Intel called it a “one-off mistake”; it’s the best mistake Intel ever made.

bryanlarsen•1d ago
Intel is claiming that Panther Lake has 30% better battery life than Lunar Lake.
dudeinjapan•1d ago
Perhaps in a vacuum…

On-package memory is claimed to be a 40% reduction in power consumption. To beat actual LL by 30%, the PL chip must actually be ~58% more efficient in an apples-to-apples non-SoC configuration.

Possible if they doped PL’s silicon with magic pixie dust.

wtallis•1d ago
> On package memory is claimed to be a 40% reduction in power consumption.

40% reduction in what power consumption? I don't think memory is usually responsible for even 40% of the total SoC + memory power, and bringing memory on-package doesn't make it consume negative power.

phonon•1d ago
Lunar Lake had a 40% reduction in PHY power use by putting memory directly on the processor package (MoP)... roughly going from 3-4 watts to 2 watts...
ac29•23h ago
Do you have more information on that? I have a Meteor Lake laptop (pre-Lunar Lake) and the entire machine averages ~4W most of the time, including the screen, WiFi, storage, and everything else. So I don't see how the CPU memory controller can use 3-4W unless it is for irrelevantly brief periods of time.
phonon•20h ago
That's peak usage. I don't know how reduced the PHY power usage is when there aren't any memory accesses. For comparison, the peak wattage of Meteor Lake is something like 30-60 Watts.

https://www.phoronix.com/review/intel-whiskeylake-meteorlake...

sidewndr46•1d ago
Somewhat ironically, if they were laser-focused using infrared lasers, wouldn't that imply the company was not very precise at all? Infrared is something like 700 nm, which would be huge in terms of transistors.
davidmurdoch•23h ago
State of the art lithography currently uses extreme ultraviolet, which is 13.5nm. So maybe they are EUV laser-focused, just with many mirrors pointing it in 5 different directions?
undersuit•23h ago
Sounds very expensive.
davidmurdoch•23h ago
Only like $400 million per fab.
pritambarhate•23h ago
It's all the things Apple's processors are excellent at, and AMD is not far behind Apple. So unless Intel delivers on all those things, they can't hope to regain the market share they have lost.
alecco•1d ago
I really, really want Intel to do well. I like their open oneAPI for unified CPU-GPU programming. It would be nice to have some competition/alternative to NVIDIA and TSMC.

But I won't be investing time and money on Intel again while the same anti-engineering beancounter board is still there. For example, they never owned up to the recent serious Raptor Lake hardware issues, and they never showed clients how this will never happen again.

https://en.wikipedia.org/wiki/Raptor_Lake#Instability_and_de... "Intel has decided not to halt sales or recall any units"

skystarman•1d ago
Great point. This board nearly destroyed one of the world's great tech companies, and they are STILL in charge after not being held accountable or admitting their mistakes over the past decade-plus.

The only reason INTC isn't in a death spiral is that the US government won't let that happen.

etempleton•23h ago
They did reshuffle their board a bit after firing Pat, bringing in some people with industry and domain expertise rather than just academics and outside-industry folks.
phkahler•1d ago
Clock speed? Hyperthreading? AVX512? APX?
icegreentea2•1d ago
There's a link to product brief PDF from the bottom of the press release. Page 9 and 10 have product tables. https://www.intel.com/content/www/us/en/content-details/8713...

P-Core Max Frequency 5.1 on the highest end, and the lowest at 4.4.

There's no hyperthreading: https://www.pcgamer.com/hardware/processors/now-youve-got-so...

Dunno about AVX and APX. They're not making it easy to find, so... probably not.

2OEH8eoCRo0•1d ago
AVX 2 according to:

https://www.intel.com/content/www/us/en/products/sku/245716/...

aseipp•21h ago
No AVX-512; client SKUs are going to go straight to APX/AVX10, and those are confirmed for Nova Lake, which is 2H 2026 (it will probably be called "Core Ultra Series 4" or whatever, I guess).
kleinmatic•1d ago
I wonder how much of the funding that led to this came from the Biden-era CHIPS and Science Act? I can't find a straight answer amid the AI slop and marketing hype about both of them.

Update: Looks like the Trump admin converted billions in unpaid CHIPS Act grants into equity in Intel last year: https://techhq.com/news/intel-turnaround-strategy-panther-la...

daneel_w•1d ago
Is Intel's 18A (~2nm) their own hardware or did they acquire ASML equipment for this plant?
smallmancontrov•1d ago
Intel never made EUV machines, never claimed to make EUV machines, never aspired to make EUV machines, and have run multiple marketing campaigns bragging about the ASML EUV machines they purchased.
wtallis•1d ago
And even prior to EUV, Intel didn't make their own lithography tools.
T-A•1d ago
https://www.tomshardware.com/tech-industry/semiconductors/in...
GeorgeOldfield•1d ago
x86? Max 96GB RAM? Is this a joke?
wtallis•1d ago
It's 96 GB max when using LPDDR5, or 128 GB when using DDR5. These are consumer chips with the same 128-bit memory bus width that x86 consumer chips have been using for many years, and this is a laptop-specific product line so they're not trying to squeeze in as many ranks of memory as possible.
etempleton•23h ago
This is a laptop-specific product. The next desktop variant will come later in 2026 or 2027, and I imagine that will support more RAM.