
Interviewing Intel's Chief Architect of x86 Cores

https://chipsandcheese.com/p/interviewing-intels-chief-architect
146•ryandotsmith•4mo ago

Comments

brucehoult•3mo ago
Oh em gee ... what a contentless interview.

"We made it wider and deeper".

Gosh. Why didn't anyone think about doing that before?

saagarjha•3mo ago
Because that costs power and area.
brucehoult•3mo ago
And it still does.

And the last generation was wider and deeper than the one before it, also costing power and area.

The question that should be asked ... but which would never be answered ... is "What was it that you changed that REQUIRED and ALLOWED you to go wider and deeper?"

It's not a new process node every time.

There's no NEED to have a massive reorder buffer unless you can decode and dispatch that number of instructions in the time it takes for a load to arrive from whichever level of the memory hierarchy you're optimising for. And there's no POINT if you're often going to get a misprediction within that number of instructions. OK, so wider decode is one component of that. Is there a difference in memory latency as well?

Wider decode past 3 or 4 instructions increasingly means that you can't just end your packet of decoded instructions at the first branch -- as you get wider, you increasingly have to both parse past a conditional branch and predict more than one branch in the same decode cycle. You'll also get into branches that jump to other instructions in the same decode group (either forward or backward).

There are all kinds of complications there, with no doubt interesting solutions, that go far beyond "we went wider and deeper".
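To put rough numbers on that sizing argument, a back-of-envelope sketch (the decode width, L3 latency, branch density, and predictor accuracy below are illustrative assumptions, not figures from the interview):

    # Why "wider and deeper" only pays off together with better prediction.
    decode_width = 8      # instructions decoded/dispatched per cycle (assumed)
    l3_latency = 50       # cycles for a load to return from L3 (assumed)

    # ROB entries needed to keep decoding while one load is outstanding:
    rob_needed = decode_width * l3_latency
    print(f"ROB entries to hide an L3 miss: ~{rob_needed}")      # ~400

    # Is a window that deep usable? Assume 1 branch per 6 instructions
    # and a 99%-accurate predictor:
    branches_in_window = rob_needed / 6
    p_clean = 0.99 ** branches_in_window
    print(f"P(no mispredict inside the window): {p_clean:.2f}")  # ~0.51

    # At ~99% accuracy, roughly half of these 400-entry windows get cut
    # short by a mispredict -- so extra depth is wasted unless prediction
    # (and multi-branch-per-cycle decode) improves in step.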

porridgeraisin•3mo ago
https://chatgpt.com/share/68ef6cc3-1c48-8013-a545-905af89fbc...

I asked ChatGPT to give a contentful summary of the interview, in case anyone is interested. It seems to be more or less accurate, albeit surface level.

It gets the "why" but not the "how". Maybe someone here can prompt it further to speculate on the "how". I don't think I'll be able to verify its output well enough to do that.

mort96•3mo ago
I'm not sure what you expect to get out of this. How do you make a "contentful summary" of a contentless interview? Where do you get the content from?
porridgeraisin•3mo ago
By using general knowledge to explain, e.g., what adding a store address unit accomplishes in the context of the rest of the interview. Did you even read the chat?
MBCook•3mo ago
That doesn’t add useful content. It adds definitions. That’s just padding.

Only the interviewee can add content.

I’m also of the opinion “I asked ChatGPT for a summary” type comments are very low effort and don’t add to the discussion.

delfinom•3mo ago
Unfortunately, our AI future involves many more people refusing to use their brains for more than a few seconds, depending instead on AI-generated summaries without knowing which parts are hallucinated, or even the point.
porridgeraisin•3mo ago
Or, they read the transcription, didn't have time to watch the video interview, and used an LLM to expand it into prose that makes sense, as an aid to the casual reader. I know a fair bit about the topic at hand :) but not enough to be gung-ho about it on a tech forum frequented by legends.

If you actually went through the LLM output, found problems with it, and then commented this, it would be fine. Until then it's an unfounded accusation.

MBCook•3mo ago
If they want to use an LLM, they can. I don’t see what posting it adds.
porridgeraisin•3mo ago
"If people want to read an article, they can search for it themselves. I don't see what posting it here adds"
porridgeraisin•3mo ago
> don't add to the discussion

For sure, I'm against it as well; it's just that in this case the transcription provided in the article was so terse that it was more or less useless. LLMs are good at expanding it to make more sense as prose. If you open the link, that's what the prompt asks it to do as well. I'd argue that's useful and not just padding.

> Add content

Yes, I mentioned this in my original comment: "gets the why but not the how", "surface level", etc.

jng•3mo ago
He is no Jim Keller, and the mostly[1] automated transcript makes it read as cringe, but it is not at all devoid of content.

Some examples of very interesting, non-obvious content:

* Even if store ports are kept fixed (2 in his example), adding store address generators (up to 4 in his example) actually improves performance, because it frees up load port dependencies.

* Within the same core, they use two different styles of load/store address contention mechanisms, which he describes as two tables, one with explicit "allows" and the other with explicit "denies" -- which of course end up converging (I understand this refers to two different encodings which vary in what is stored; a toy sketch of the idea follows after the list).

* Between cores, they have completely separate teams which reach different designs for things like this.

* It was interesting to me to discover how isolated the different core design teams work (which makes sense).

* It was interesting to me to picture the load/store address contention subsystem, which must be quite complex and needs to be really fast.

And I'll stop listing there; there's more on the different types of workloads, gaming workloads being similar to DB workloads (and even more similar to each other than to SPEC benchmarks), and so on.
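On that "allows"/"denies" point, here is a toy sketch of how two encodings of the same load/store dependence information converge on the same answers (the class names, PC-indexed sets, and default behaviours are my guesses for illustration, not the actual design):

    class AllowTable:
        """Loads wait behind older stores by default; the table lists
        loads proven safe to bypass."""
        def __init__(self):
            self.allowed = set()              # PCs of independent loads
        def may_bypass(self, load_pc):
            return load_pc in self.allowed

    class DenyTable:
        """Loads bypass older stores by default; the table lists loads
        that were caught aliasing a store and must wait."""
        def __init__(self):
            self.denied = set()               # PCs of aliasing loads
        def may_bypass(self, load_pc):
            return load_pc not in self.denied

    # Trained on the same history, both converge on the same answers;
    # they differ in what must be stored and in how an untrained load
    # behaves: conservative (wait) vs aggressive (bypass, then replay
    # on a violation).
    allow, deny = AllowTable(), DenyTable()
    allow.allowed.add(0x4020)                 # load proven independent
    deny.denied.add(0x401A)                   # load that once aliased
    print(allow.may_bypass(0x4020), deny.may_bypass(0x401A))  # True False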

Just go read the interview if you're interested in CPU design!

[1] mostly automated: at least the dialog name labels seem to be hand-edited, as one of them has a typo

brucehoult•3mo ago
You're right, the things you list do contain fresh information. Though the similarity between game logic and business logic is not a new observation ... and web browsers are in the same ballpark too. I think it's a code size vs data size thing. SPEC programs mostly have a relatively small amount of code, gcc being an obvious exception. And I guess Blender in SPEC 2017 FP.
pixelpoet•3mo ago
I did the transcription, but not the dialogs and labels etc. So I can say with certainty that it wasn't automated :)

What made the transcription "cringe"? I'd like to believe it's accurate.

jng•3mo ago
Oops, sorry about carelessly throwing the "cringe" label at that. Thanks for the transcript which allowed me to enjoy the content, which I did find very interesting.

I haven't watched the video, so I am not sure how he actually talks, but what read as cringe to me was things like the following paragraph:

"Stephen Robinson: Yeah. So let’s, let’s break it down into address generation versus execution. So, when you have three load execution ports, you need three load address generators. And so that’s there. On the store side, we have four store address generation units. But we only sustain two stores into the data cache."

Which reads weird: "let's" is repeated twice, probably a stutter, and could be transcribed just once. The "So" or "And so" the interviewee uses all the time at the start of sentences can also be removed most of the time for clearer and easier reading, without loss of meaning. Some sentences could be removed almost entirely, as they provide no actual information. The previous paragraph could be transcribed like this:

"Stephen Robinson: Let’s break it down into address generation versus execution. When you have three load execution ports, you need three load address generators. That’s there. On the store side, we have four store address generation units. But we only sustain two stores into the data cache."

I hesitated to remove "That's there.", so I left it. But everything else I removed makes the text clearer, and I think I'm not being unfaithful to the original. Removing the duplicate "let's" is a given, as it's normal to stutter when speaking, but you don't really want to transcribe that unless the goal is to capture the talking imperfections we all have. And all the other things I removed, "Yeah", "So", "And so", are basically the same type of thing.
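For what it's worth, the mechanical part of that cleanup (leading fillers, stuttered repeats) is scriptable; a rough sketch in Python, with the judgment calls (like whether to keep "That's there.") left to a human:

    import re

    # Sentence-initial fillers to drop (a deliberately small list):
    FILLERS = re.compile(r"^(Yeah\.|And so,?|So,?)\s+", re.IGNORECASE)

    def tidy(sentence):
        s = FILLERS.sub("", sentence).strip()
        # collapse stutters like "let's, let's" -> "let's"
        s = re.sub(r"\b([\w']+),\s+\1\b", r"\1", s, flags=re.IGNORECASE)
        return s[:1].upper() + s[1:] if s else s

    raw = "So let's, let's break it down into address generation versus execution."
    print(tidy(raw))
    # -> Let's break it down into address generation versus execution.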

I thought this was automated because it had so many of the meaningless go-to words and hesitations from the original. Now that you mention it, automated transcription would probably never have produced something this good. Otherwise we are talking about stylistic preference, which is always subjective -- although I'd definitely prefer the style of transcription suggested here.

Thanks again. I read chips and cheese with interest, quite often, and enjoy it quite a lot. Keep up the good work. And sorry for the careless put-down.

misja111•3mo ago
Well, isn't Intel mostly kept alive by capital injections from the US government and Nvidia nowadays? How much content did you expect from a straw puppet?
norin•3mo ago
Yeah, strange, sort of.
BoredPositron•3mo ago
Odd read, especially after that preamble:

> The transcript has been edited for readability and conciseness.

Not a lot of novel information either.

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
1•PaulHoule•3m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•3m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•4m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
1•Brajeshwar•4m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•6m ago•1 comment

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•6m ago•0 comments

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
4•c420•7m ago•0 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•7m ago•0 comments

It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
1•HotGarbage•7m ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Back end)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•7m ago•1 comment

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•9m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
3•surprisetalk•12m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
3•TheCraiggers•13m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
2•birdculture•14m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
7•doener•15m ago•2 comments

MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•16m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•16m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
3•tanelpoder•17m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•18m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
2•elsewhen•21m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•22m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•26m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
2•mooreds•26m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•27m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•27m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•27m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•27m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•29m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•29m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
3•nick007•30m ago•0 comments