frontpage.

Made with ♥ by @iamnishanth

Astral to Join OpenAI

https://astral.sh/blog/openai
462•ibraheemdev•2h ago•238 comments

OpenBSD: PF queues break the 4 Gbps barrier

https://undeadly.org/cgi?action=article;sid=20260319125859
60•defrost•1h ago•19 comments

Juggalo Makeup Blocks Facial Recognition Technology (2019)

https://consequence.net/2019/07/juggalo-makeup-facial-recognition/
98•speckx•2h ago•33 comments

Consensus Board Game

https://matklad.github.io/2026/03/19/consensus-board-game.html
23•surprisetalk•1h ago•0 comments

Afroman found not liable in defamation case

https://nypost.com/2026/03/18/us-news/afroman-found-not-liable-in-bizarre-ohio-defamation-case/
675•antonymoose•5h ago•273 comments

The Shape of Inequalities

https://www.andreinc.net/2026/03/16/the-shape-of-inequalities/
6•nomemory•54m ago•0 comments

Conway's Game of Life, in real life

https://lcamtuf.substack.com/p/conways-game-of-life-in-real-life
253•surprisetalk•11h ago•67 comments

Afroman Wins Civil Trial over Use of Police Raid Footage in His Music Videos

https://www.nytimes.com/2026/03/19/us/afroman-trial-lemon-cake-verdict.html
239•pseudolus•3h ago•27 comments

Pretraining Language Models via Neural Cellular Automata

https://hanseungwook.github.io/blog/nca-pre-pre-training/
47•shmublu•4d ago•11 comments

macOS 26 breaks custom DNS settings including .internal

https://gist.github.com/adamamyl/81b78eced40feae50eae7c4f3bec1f5a
6•adamamyl•24m ago•1 comment

Nvidia greenboost: transparently extend GPU VRAM using system RAM/NVMe

https://gitlab.com/IsolatedOctopi/nvidia_greenboost
424•mmastrac•3d ago•119 comments

Eniac, the First General-Purpose Digital Computer, Turns 80

https://spectrum.ieee.org/eniac-80-ieee-milestone
71•baruchel•9h ago•32 comments

OpenRocket

https://openrocket.info/
668•zeristor•4d ago•114 comments

Warranty Void If Regenerated

https://nearzero.software/p/warranty-void-if-regenerated
453•Stwerner•18h ago•275 comments

OpenAI to Acquire Astral

https://openai.com/index/openai-to-acquire-astral/
154•meetpateltech•2h ago•78 comments

How many branches can your CPU predict?

https://lemire.me/blog/2026/03/18/how-many-branches-can-your-cpu-predict/
54•ibobev•2h ago•15 comments

2% of ICML papers desk rejected because the authors used LLM in their reviews

https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/
148•sergdigon•5h ago•131 comments

Gluon: Explicit Performance

https://www.lei.chat/posts/gluon-explicit-performance/
8•matt_d•3d ago•0 comments

'Your Frustration Is the Product'

https://daringfireball.net/2026/03/your_frustration_is_the_product
189•llm_nerd•3h ago•118 comments

Stdwin: Standard window interface by Guido Van Rossum [pdf]

https://ir.cwi.nl/pub/5998/5998D.pdf
62•ivanbelenky•2d ago•36 comments

Austin’s surge of new housing construction drove down rents

https://www.pew.org/en/research-and-analysis/articles/2026/03/18/austins-surge-of-new-housing-con...
668•matthest•15h ago•806 comments

A Preview of Coalton 0.2

https://coalton-lang.github.io/20260312-coalton0p2/
9•varjag•4d ago•0 comments

LotusNotes

https://computer.rip/2026-03-14-lotusnotes.html
144•TMWNN•4d ago•74 comments

A sufficiently detailed spec is code

https://haskellforall.com/2026/03/a-sufficiently-detailed-spec-is-code
503•signa11•13h ago•264 comments

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

https://github.com/alainnothere/llm-circuit-finder
191•xlayn•18h ago•71 comments

The next fight over the use of facial recognition could be in the supermarkets

https://www.politico.com/newsletters/digital-future-daily/2026/03/16/the-facial-recognition-groce...
31•speckx•2h ago•7 comments

Wander – A tiny, decentralised tool to explore the small web

https://susam.net/wander/
318•susam•1d ago•78 comments

Nvidia NemoClaw

https://github.com/NVIDIA/NemoClaw
350•hmokiguess•1d ago•230 comments

Autoresearch for SAT Solvers

https://github.com/iliazintchenko/agent-sat
150•chaisan•14h ago•30 comments

The math that explains why bell curves are everywhere

https://www.quantamagazine.org/the-math-that-explains-why-bell-curves-are-everywhere-20260316/
177•ibobev•2d ago•107 comments

How many branches can your CPU predict?

https://lemire.me/blog/2026/03/18/how-many-branches-can-your-cpu-predict/
54•ibobev•2h ago

Comments

withinboredom•1h ago
Before switching a hot code path to branchless code, I was seeing strangely lower performance on Intel vs. AMD under load. It was a little surprising to realize that the branch predictor was the most likely cause.
stephencanon•1h ago
Enlarging a branch predictor requires area and timing tradeoffs. CPU designers have to balance branch predictor improvements against other improvements they could make with the same area and timing resources. What this tells you is that either Intel is more constrained for one reason or another, or Intel's designers think that they net larger wins by deploying those resources elsewhere in the CPU (which might be because they have identified larger opportunities for improvement, or because they are basing their decision making on a different sample of software, or both).
bee_rider•1h ago
I guess the generate_random_value function uses the same seed every time, so the expectation is that the branch predictor should be able to memorize it with perfect accuracy.

But the memorization capacity of the branch predictor must be a trade-off, right? I guess this generate_random_value function is impossible to predict using heuristics, so I guess the question is how often we encounter 30k long branch patterns like that.

Which isn’t to say I have evidence to the contrary. I just have no idea how useful this capacity actually is, haha.
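The setup being discussed can be sketched roughly as follows. This is a hypothetical stand-in, not the article's actual code: `generate_random_value` here is a fixed-seed xorshift64, and the seed constant is arbitrary. The point is only that a fixed seed makes the branch-outcome sequence identical on every run, so the predictor can in principle memorize it.

```cpp
#include <cstdint>

// Hypothetical stand-in for the post's generate_random_value: a fixed-seed
// xorshift64, so every run replays the identical sequence of branch outcomes
// and the predictor can, in principle, memorize the whole pattern.
static const uint64_t kSeed = 0x853c49e6748fea9bULL; // arbitrary fixed seed
static uint64_t rng_state = kSeed;

static uint64_t generate_random_value() {
    uint64_t x = rng_state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return rng_state = x;
}

// Replay the pattern: count how many of n draws take the "odd" branch.
// Because the seed is reset first, the outcome sequence is deterministic.
long count_odd(long n) {
    rng_state = kSeed;
    long odd = 0;
    for (long i = 0; i < n; i++) {
        if (generate_random_value() & 1) // the single hot branch
            odd++;
    }
    return odd;
}
```

A predictor with enough memorization capacity would approach zero mispredictions on repeated replays of such a loop; a predictor relying only on heuristics would hover near 50% misses.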

bluGill•48m ago
30k-long patterns are likely rare. However, in the real world there is a lot of code with 30k different branches that run several times each, so the same ability to memorize/predict 30k branches is useful. Even though this particular example isn't realistic, the result still looks good.

Of course we can't generalize this to "Intel bad." This pattern seems unrealistic (at least at a glance, though real experts should have real data/statistics on what real code does, not just my semi-educated guess), so perhaps Intel has better prediction algorithms for the real world that simply miss this example. Not being an expert in the branches real-world code takes, I can't say.

bee_rider•41m ago
Yeah, I’m also not an expert in this. Just had enough architecture classes to know that all three companies are using cleverer branch predictors than I could come up with, haha.

Another possibility is that the memorization capacity of the branch predictors is a bottleneck, but a bottleneck that they aren’t often hitting. As the design is enhanced, that bottleneck might show up. AMD might just have most recently widened that bottleneck.

Super hand-wavey, but to your point about data, without data we can really only hand-wave anyway.

IcePic•31m ago
https://chromium.googlesource.com/chromiumos/third_party/gcc... has some looong select/case things with lots of ifs in them, but I don't think they would hit 30k.
rayiner•1h ago
Using random values defeats the purpose of the branch predictor. The best branch predictor for this test would be one that always predicts the branch taken or not taken.
dundarious•39m ago
There will be runs of even and runs of odd outputs from the rng. This benchmark tests how well the branch predictor "retrains" to the current run. It is a good test of the predictor's adaptability.

The benchmark is still narrow in focus, and the results don't unequivocally mean AMD's predictor is overall "the best".

gpderetta•24m ago
The author is running the benchmark multiple times with the same random seed to discover how long a pattern the predictor can learn.
OskarS•49m ago
Hmm, that's interesting. The code as written only has one branch, the if statement (well, two, the while loop exit clause as well). My mental model of the branch predictor was that for each branch, the CPU maintained some internal state like "probably taken/not taken" or "indeterminate", and it "learned" by executing the branch many times.

But that's clearly not right, because apparently the specific data it's branching off matters too? Like, "test memory location X, and branch at location Y", and it remembers both the specific memory location and which specific branch branches off of it? That's really impressive, I didn't think branch predictors worked like that.

Or does it learn the exact pattern? "After the pattern ...0101101011000 (each 0/1 representing the branch not taken/taken), it's probably 1 next time"?

gpderetta•26m ago
Typical branch predictors can both learn patterns (even very long patterns) and use branch history (the probability of a branch being taken depends on the path taken to reach it). They don't normally look at data other than branch addresses (and targets, for indirect branches).
jeffbee•19m ago
They can't. The data that would be needed isn't available at the time the prediction is made.
1718627440•9m ago
Yeah, otherwise you wouldn't need to predict anything.
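The history-based mechanism described above can be illustrated with a toy model. This is not how any real predictor is implemented (real designs combine several tables, tags, and tournament logic); it is the textbook scheme of indexing 2-bit saturating counters by the last H global branch outcomes, with H = 8 chosen arbitrarily here.

```cpp
#include <cstdint>

// Toy pattern-history predictor: index a table of 2-bit saturating counters
// by the last H global branch outcomes. After a few passes over a repeating
// pattern, each history context settles on the outcome that follows it.
struct ToyPredictor {
    static const int H = 8;         // history length in bits (arbitrary)
    uint8_t table[1 << H] = {0};    // 2-bit counters: >= 2 means "taken"
    unsigned history = 0;           // last H outcomes, newest in bit 0

    bool predict() const { return table[history] >= 2; }
    void update(bool taken) {
        uint8_t &c = table[history];
        if (taken && c < 3) c++;
        if (!taken && c > 0) c--;
        history = ((history << 1) | (taken ? 1u : 0u)) & ((1u << H) - 1);
    }
};

// Count mispredictions over `rounds` replays of a fixed taken/not-taken pattern.
int mispredicts(const bool *pattern, int len, int rounds) {
    ToyPredictor p;
    int miss = 0;
    for (int r = 0; r < rounds; r++) {
        for (int i = 0; i < len; i++) {
            bool taken = pattern[i];
            if (p.predict() != taken) miss++;
            p.update(taken);
        }
    }
    return miss;
}
```

On a repeating pattern whose period is at most H, each position eventually gets its own unique history context, so nearly all misses happen in the first few replays and the steady-state accuracy is perfect. Patterns longer than the history can track (the 30k case in the article) are where capacity differences between designs show up.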
LPisGood•25m ago
There are many branch prediction algorithms out there. They range from fun architecture papers that try to use machine learning to static predictors that don’t even adapt to the prior outcomes at all.
Night_Thastus•11m ago
AMD CPUs have been killing it lately, but this benchmark feels quite artificial.

It's a tiny, trivial example with 1 branch that behaves in a pseudo-random way (random, but fixed seed). I'm not sure that's a really good example of real world branching.

How would the various branch predictors perform when the branch taken varies from 0% likely to 100% likely, in say, 5% increments?

How would they perform when the contents of both paths are very heavy, which involves a lot of pipeline/speculative-execution flushing?

How would they perform when many different branches all occur in sequence?

Without info like that, this feels a little pointless.
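The probability-sweep experiment proposed above could be sketched as below. This is a hypothetical harness, not anything from the article: `run_at_probability` is an invented helper, and the timing loop is omitted (you would wrap each call in a steady-clock or hardware-counter measurement and sweep `pct` in 5% steps).

```cpp
#include <cstdint>

// Simple xorshift64 stream; the seed is an arbitrary nonzero constant.
static uint64_t sweep_state = 88172645463325252ULL;
static uint64_t xorshift64() {
    sweep_state ^= sweep_state << 13;
    sweep_state ^= sweep_state >> 7;
    sweep_state ^= sweep_state << 17;
    return sweep_state;
}

// Run n iterations with the branch taken roughly `pct` percent of the time;
// returns how often it was taken. Timing/perf counters would go around this
// loop to chart misprediction cost as taken-probability moves 0% -> 100%.
long run_at_probability(int pct, long n) {
    long taken = 0;
    for (long i = 0; i < n; i++) {
        if (xorshift64() % 100 < (uint64_t)pct) // hot branch under test
            taken++;
    }
    return taken;
}
```

The expected shape is that misprediction cost peaks near 50% taken-probability and falls toward zero at both extremes, where a trivial always-taken or always-not-taken prediction is almost always right; where exactly each vendor's curve peaks and how steeply it falls would be the interesting comparison.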