
How many branches can your CPU predict?

https://lemire.me/blog/2026/03/18/how-many-branches-can-your-cpu-predict/
28•chmaynard•1d ago

Comments

withinboredom•12h ago
Before switching a hot code path to a branchless version, I was seeing unexpectedly lower performance on Intel vs. AMD under load. It was a little surprising to realize the branch predictor was the most likely cause.
stephencanon•12h ago
Enlarging a branch predictor requires area and timing tradeoffs. CPU designers have to balance branch predictor improvements against other improvements they could make with the same area and timing resources. What this tells you is that either Intel is more constrained for one reason or another, or Intel's designers think that they net larger wins by deploying those resources elsewhere in the CPU (which might be because they have identified larger opportunities for improvement, or because they are basing their decision making on a different sample of software, or both).
pbsd•7h ago
I mean, he's comparing the 2024 Zen 5 and M4 against Intel's 2022 Raptor Lake, which is two generations behind. Lion Cove should be roughly on par with the M4 on this test.
stephencanon•5h ago
That would fall under "more constrained", due to process limits.
bee_rider•12h ago
I guess the generate_random_value function uses the same seed every time, so the expectation is that the branch predictor should be able to memorize it with perfect accuracy.

But the memorization capacity of the branch predictor must be a trade-off, right? This generate_random_value function is impossible to predict using heuristics, so I guess the question is how often we encounter 30k-long branch patterns like that.

Which isn’t to say I have evidence to the contrary. I just have no idea how useful this capacity actually is, haha.
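For concreteness, the fixed-seed point can be sketched like this (Python; generate_random_value here is a hypothetical stand-in, not the blog post's actual function):

```python
import random

def generate_random_value(rng):
    # Hypothetical stand-in for the benchmark's RNG; the real
    # function in the blog post may differ.
    return rng.getrandbits(32)

def branch_outcomes(n, seed=42):
    # With a fixed seed, the taken/not-taken sequence is identical
    # on every run, so a predictor with enough capacity could in
    # principle memorize it exactly, even though it is unpredictable
    # by any heuristic.
    rng = random.Random(seed)
    return [generate_random_value(rng) % 2 == 1 for _ in range(n)]

# Same seed -> same 30k-outcome pattern on every run.
assert branch_outcomes(30_000) == branch_outcomes(30_000)
```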

bluGill•11h ago
30k-long patterns are likely rare. However, in the real world there is a lot of code with 30k different branches that we hit several times, so the same capacity to memorize/predict 30k branches is useful there. Even though this particular example isn't realistic, the result still looks good.

Of course we can't generalize this to "Intel bad". This pattern seems unrealistic (at least at a glance - real experts should have real data/statistics on what real code does, not just my semi-educated guess), so perhaps Intel has better prediction algorithms for real-world code that happen to miss this example. Not being an expert in the branch patterns real-world code takes, I can't say.

bee_rider•11h ago
Yeah, I’m also not an expert in this. Just had enough architecture classes to know that all three companies are using cleverer branch predictors than I could come up with, haha.

Another possibility is that the memorization capacity of the branch predictors is a bottleneck, but a bottleneck that they aren’t often hitting. As the design is enhanced, that bottleneck might show up. AMD might just have most recently widened that bottleneck.

Super hand-wavey, but to your point about data, without data we can really only hand-wave anyway.

IcePic•11h ago
https://chromium.googlesource.com/chromiumos/third_party/gcc... has some looong select/case things with lots of ifs in them, but I don't think they would hit 30k.
rayiner•12h ago
Using random values defeats the purpose of the branch predictor. The best branch predictor for this test would be one that always predicts the branch taken or not taken.
dundarious•11h ago
There will be runs of even and runs of odd outputs from the rng. This benchmark tests how well the branch predictor "retrains" to the current run. It is a good test of the predictor's adaptability.

The benchmark is still narrow in focus, and the results don't unequivocally mean AMD's predictor is overall "the best".

gpderetta•11h ago
The author is running the benchmark multiple times with the same random seed to discover how long a pattern the predictor can learn.
OskarS•11h ago
Hmm, that's interesting. The code as written has only one branch, the if statement (well, two, counting the while loop's exit condition). My mental model of the branch predictor was that for each branch, the CPU maintained some internal state like "probably taken/not taken" or "indeterminate", and it "learned" by executing the branch many times.

But that's clearly not right, because apparently the specific data it's branching off matters too? Like, "test memory location X, and branch at location Y", and it remembers both the specific memory location and which specific branch branches off of it? That's really impressive, I didn't think branch predictors worked like that.

Or does it learn the exact pattern? "After the pattern ...0101101011000 (each 0/1 representing the branch not taken/taken), it's probably 1 next time"?

gpderetta•11h ago
Typical branch predictors can both learn patterns (even very long ones) and use branch history (the probability of a branch being taken depends on the path taken to reach it). They don't normally look at data other than branch addresses (and targets, for indirect branches).
jeffbee•11h ago
They can't. The data that would be needed isn't available at the time the prediction is made.
1718627440•11h ago
Yeah, otherwise you wouldn't need to predict anything.
LPisGood•11h ago
There are many branch prediction algorithms out there. They range from fun architecture papers that try to use machine learning to static predictors that don’t even adapt to the prior outcomes at all.
rayiner•10h ago
Your mental model is close. Predictors generally work by having some sort of table of predictions and indexing into that table (usually via some sort of hashing) to obtain a prediction.

The simplest thing to do is use the address of the branch instruction as the index into the table. That way, each branch instruction maps onto a (not necessarily unique) entry in the table. Those entries will usually be a two-bit saturating counter that predicts either taken, not taken, or unknown.

But you can add additional information to the key. For example, a gselect predictor maintains a shift register with the outcome of the last M branches. Then it combines that shift register along with the address of the branch instruction to index into the table: https://people.cs.pitt.edu/~childers/CS2410/slides/lect-bran... (page 9). That means that the same branch instruction will map to multiple entries of the table, depending on the pattern of branches in the shift register. So you can get different predictions for the same branch depending on what else has happened.

That, for example, lets you predict small-iteration loops. Say you have a loop inside a loop, where the inner loop iterates 4 times. So you’ll have a taken branch (back to the loop header) three times but then a not-taken branch on the fourth. If you track that in the branch history shift register, you might get something like this (with 1s being taken branches):

11101110

If you use this to index into a large enough branch table, the table entries corresponding to the shift register ending in “0111” will have a prediction that the branch will be not taken (i.e. the next outcome will be a 0) while the table entries corresponding to the shift register ending in say “1110” will have a prediction that the next branch will be taken.

So the basic principle of having a big table of branch predictions can be extended in many ways by using various information to index into the table.
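The table-plus-history scheme described above can be sketched in a toy simulator (Python, purely illustrative; TwoBitCounter and run_predictor are made-up names, and real predictors hash into fixed-size shared tables rather than a dict). With the 11101110 loop pattern, a plain bimodal counter mispredicts every loop exit, while indexing by recent history learns the pattern after a short warmup:

```python
class TwoBitCounter:
    """Two-bit saturating counter: states 0-1 predict not-taken, 2-3 predict taken."""
    def __init__(self):
        self.state = 1  # start weakly not-taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

def run_predictor(pattern, reps, history_bits):
    # history_bits = 0 -> plain bimodal: one counter per static branch.
    # history_bits > 0 -> gselect-style: counters indexed by
    # (branch, recent outcome history); with a single static branch,
    # the key reduces to the masked history.
    table = {}
    history = 0
    mispredicts = 0
    for taken in pattern * reps:
        key = history & ((1 << history_bits) - 1)
        ctr = table.setdefault(key, TwoBitCounter())
        if ctr.predict() != taken:
            mispredicts += 1
        ctr.update(taken)
        history = ((history << 1) | taken) & 0xFFFF
    return mispredicts

inner_loop = [True, True, True, False]  # taken 3x, exit on the 4th
```

With `run_predictor(inner_loop, 100, 0)` the single counter mispredicts roughly once per loop, while `run_predictor(inner_loop, 100, 4)` mispredicts only during warmup, since each 4-bit history is always followed by the same outcome.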

jcalvinowens•9h ago
Check out [1]: it has the most thorough description of branch prediction I've ever seen (chapter 3), across a lot of historical and current CPUs. It is mostly empirical, so you do have to take it with a grain of salt sometimes (the author acknowledges this).

Supposedly the branch prediction on modern AMD CPUs is far more sophisticated, based on [2] (a citation pulled from [1]).

[1] https://www.agner.org/optimize/microarchitecture.pdf

[2] https://www.cs.utexas.edu/%7Elin/papers/hpca01.pdf

Night_Thastus•11h ago
AMD CPUs have been killing it lately, but this benchmark feels quite artificial.

It's a tiny, trivial example with 1 branch that behaves in a pseudo-random way (random, but fixed seed). I'm not sure that's a really good example of real world branching.

How would the various branch predictors perform when the branch taken varies from 0% likely to 100% likely, in say, 5% increments?

How would they perform when the contents of both paths are very heavy, which involves a lot of pipeline/speculative-execution flushing?

How would they perform when many different branches all occur in sequence?

How costly are their branch mispredictions, relative to one another?

Without info like that, this feels a little pointless.
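For what it's worth, the first of those sweeps is easy to generate; a minimal sketch (Python, illustrative only; the actual timing harness is omitted and biased_branch_outcomes is a made-up name):

```python
import random

def biased_branch_outcomes(p_taken, n, seed=0):
    # Generate a branch-outcome stream where "taken" occurs with
    # probability p_taken. Sweeping p from 0.0 to 1.0 in 5% steps,
    # then running a real predictor over each stream, would exercise
    # it across the whole predictability range.
    rng = random.Random(seed)
    return [rng.random() < p_taken for _ in range(n)]

workloads = {p / 100: biased_branch_outcomes(p / 100, 10_000)
             for p in range(0, 101, 5)}
```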

jeffbee•10h ago
He isn't trying to determine how well it works. He's trying to determine how large it is.
Night_Thastus•10h ago
Their post gives the impression that clearly AMD's branch prediction is better, because this one number is bigger. "Once more I am disappointed by Intel"

While it could very well be true that the AMD branch predictor is straight-up better, the data they provided is insufficient for that conclusion.

vlovich123•5h ago
You may want to look up who Daniel Lemire is and the work he's done. What he's basically saying is "in the totality of things I've examined where Intel has come up short, this is another data point that is in line with their performance across the board". It's not "this one benchmark proves Intel sucks hurr hurr" - it's saying it's yet another data point supporting the perception that Intel is struggling against the competition.
bee_rider•9h ago
It is a tiny example, but it measures something. It doesn’t handle the other performance characteristics you mention, but it has the advantage of being a basically pure measurement of the memorization ability of the branch predictors.

The blog post is not very long—not much longer than some of the comments we’ve written here about it. So, I think it is reasonable to expect the reader to be able to hold the whole thing in their head, and understand it, and understand that it is extremely targeted at a specific metric.

user070223•10h ago
Do any JIT/AOT/hot-code-optimization techniques, compilers, or runtimes take into account whether the branch predictor is saturated and try to recompile to go branchless?
BoardsOfCanada•10h ago
In general, branchless is better for branches that can't be predicted 99.something percent of the time; saturating the branch predictor the way this benchmark does isn't a concern. The big concern is mispredicting a branch, then executing 300 instructions and having to throw them away once the branch actually resolves.
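That trade can be sketched like this (Python used for illustration; in compiled code the branchless form corresponds to a cmov or bitwise blend that the compiler may emit for unpredictable branches):

```python
def select_branchy(cond, a, b):
    # The CPU must predict this branch; a mispredict throws away
    # all the speculative work issued past it.
    if cond:
        return a
    return b

def select_branchless(cond, a, b):
    # No branch: build an all-ones or all-zeros mask and blend.
    mask = -int(bool(cond))      # -1 (all bits set) if cond else 0
    return (a & mask) | (b & ~mask)
```

Both functions return the same value; the branchless version does slightly more work every time in exchange for never mispredicting.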
piinbinary•6h ago
How does the benchmark tell how many branches were mispredicted? Is that something the processor exposes?
ErikCorry•1h ago
Yeah, performance counters.
Paul_Clayton•1h ago
By testing only one static branch, it is possible that the performance of the Intel Emerald Rapids predictor is not representative of a more realistic workload. If path information is used to index the predictor in addition to global (taken/not-taken) branch history without being XORed into the global history (or fully mingled with it), or if the branch address is similarly not fully scrambled with the global history, using only one branch might leave some predictor storage unused (never indexed). Either mechanism might be useful for reducing tag overhead while keeping aliasing low. Another possibility is that the associativity of the tables does not allow tags for the same static branch to differ.

(Tags could be made to differ by, e.g., XORing a limited amount of global history with the hash of the address.)

It is also possible that the AMD Zen 5 and Apple M4 have similar unused predictor capacity and simply have much larger predictors.

I did not think even TAGE predictors used 5k branch history, so there may be some compression of the data (which is only pseudorandom).

It might be interesting to unroll the loop (with sufficient spacing between branches to ensure different indexing) to see whether that measurably affected the results.

Of course, since "write to buffer" is just a store and increment and the compiler should be able to guarantee no buffer overflow (buffer size allocated for worst case) and that the memory store has no side effects, the branch could be predicated by selecting either new value to be stored or the old value and always storing. This would be a little extra work and might have store queue issues (if not all store queue entries can have the same address but different version numbers), so it might not be a safe optimization.

barbegal•33m ago
This is good work. I wish branch predictors were better reverse-engineered so that CPU simulation could be improved. It would be much better to accurately predict how software will run on other processors in simulation, rather than having to go out and buy hardware to test on (which is still how we have to do things in 2026).
