
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
140•theblazehen•2d ago•41 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
667•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•32 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
222•dmpetrov•14h ago•117 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
26•jesperordrup•4h ago•16 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
493•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
43•helloplanets•4d ago•41 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•4 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
182•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

Performance Debugging with LLVM-mca: Simulating the CPU

https://johnnysswlab.com/performance-debugging-with-llvm-mca-simulating-the-cpu/
33•signa11•7mo ago

Comments

camel-cdr•7mo ago
One thing to keep in mind with llvm-mca is that not all processors use their own scheduling model, and the different scheduling models vary in how accurate they are.

E.g. Cortex-A72 uses the Cortex-A57 model, as do Cortex-A76 and even Cortex-A78.

The Neoverse V1 model has an issue width of 15, while the Neoverse V2 model (also used for the V3) has an issue width of 6.

MobiusHorizons•7mo ago
Are you saying the model used to simulate many different cpu models is the same, which makes comparing CPUs harder? Or are you saying the model is not accurate?

It’s an interesting point that the newer neoverse cores use a model with smaller issue width. Are you saying this doesn’t match reality? If so do you have any idea why they model it that way?

camel-cdr•7mo ago
> Are you saying the model used to simulate many different cpu models is the same, which makes comparing CPUs harder? Or are you saying the model is not accurate?

Both, but mostly the former. You can view the scheduling models used for a given CPU here: https://github.com/llvm/llvm-project/blob/main/llvm/lib/Targ...

    * CortexA53Model used for: A34, A35, A320, a53, a65, a65ae
    * CortexA55Model used for: A55, r82, r82ae
    * CortexA510Model used for: a510, a520, a520ae
    * CortexA57Model used for: A57, A72, A73, A75, A76, A76ae, A77, A78, A78ae, A78c
    * NeoverseN2Model used for: a710, a715, a720, a720ae, neoverse-n2
    * NeoverseV1Model used for: X1, X1c, neoverse-v1/512tvb
    * NeoverseV2Model used for: X2, X3, X4, X295, grace, neoverse-v2/3/v3ae
    * NeoverseN3Model used for: neoverse-n3
It's even worse for Apple CPUs: all of them, from apple-a7 to apple-m4, use the same "CycloneModel" of a 6-issue out-of-order core from 2013.

There are more fine-grained target-specific feature flags used, e.g. for fusion, but the base scheduling model often isn't remotely close to the actual processor.

> It’s an interesting point that the newer neoverse cores use a model with smaller issue width. Are you saying this doesn’t match reality? If so do you have any idea why they model it that way?

Yes, I opened an issue about the Neoverse cores; since then, an independent PR adjusted the V2 down from 16 wide to a more realistic 8 wide: https://github.com/llvm/llvm-project/issues/136374

Part of the problem is that LLVM's scheduling model can't represent all properties of the CPU.

The issue width for those cores seems to be set to the maximum number of uops the core can execute at once. If you look at the Neoverse V1 microarchitecture, it indeed has 15 independent issue ports: https://en.wikichip.org/w/images/2/28/neoverse_v1_block_diag...

But notice how it can only decode 8 instructions (5 if you exclude MOP cache) per cycle. This is partially because some operations take multiple cycles before the port can execute new instructions, so having more execution ports is still a gain in practice. The other reason is uop cracking. Complex addressing modes and things like load/store pairs are cracked into multiple uops, which execute on separate ports.

The problem is that LLVM's IssueWidth parameter is used to model both decode and issue width. The execution port count is derived from the ports specified in the scheduling model itself, which are basically correct.
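For reference, these per-CPU models are plain TableGen definitions in the LLVM tree. The field names below are real SchedMachineModel parameters, but the values are an approximate paraphrase of the Cyclone model (the exact definition lives in llvm/lib/Target/AArch64/AArch64SchedCyclone.td and may differ):

```tablegen
// Approximate sketch of an LLVM scheduling model definition (values illustrative).
def CycloneModel : SchedMachineModel {
  let IssueWidth = 6;            // doubles as both decode width and issue width
  let MicroOpBufferSize = 192;   // unified reservation station / ROB size
  let LoadLatency = 4;           // optimistic load-to-use latency in cycles
  let MispredictPenalty = 16;    // branch mispredict penalty in cycles
}
```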

---

If I had to guess, the reason for all of this is that modeling instruction scheduling doesn't matter all that much for codegen on OoO cores. The other reason is that just putting in the "real"/theoretical numbers doesn't automatically result in the best codegen.

It does matter, however, if you use it to visualize how a core would execute instructions.

The main point I want to make is that you shouldn't run llvm-mca with -mcpu=apple-m4, compare the result against -mcpu=znver5, and expect any reasonable answers. Just be sure to check the source, so you realize you are actually comparing a scheduling model based on the Apple Cyclone (2013) core against one based on the Zen 4 core (2022).
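To see which model you are actually getting, it helps to run both targets side by side. A hypothetical invocation sketch (the asm snippet and flag values are illustrative; an AArch64-enabled llvm-mca is assumed to be on PATH):

```shell
# Write a tiny AArch64 loop body to analyze (llvm-mca wants straight-line asm).
cat > loop.s <<'EOF'
fmla v0.4s, v1.4s, v2.4s
fmla v3.4s, v4.4s, v5.4s
fmla v6.4s, v7.4s, v16.4s
EOF
# Compare two -mcpu values; skip gracefully if llvm-mca isn't installed.
if command -v llvm-mca >/dev/null 2>&1; then
  llvm-mca -mtriple=aarch64 -mcpu=apple-m4 loop.s     # backed by the 2013 CycloneModel
  llvm-mca -mtriple=aarch64 -mcpu=neoverse-v2 loop.s  # backed by NeoverseV2Model
fi
```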

mshockwave•7mo ago
> that modeling instruction scheduling doesn't matter all that much for codegen on OoO cores.

yeah scheduling quality usually has a weaker connection to the performance of OoO cores. Though I would also like to point out:

  1. In-order cores still rely heavily on scheduling quality.
  2. Issue width is actually a big thing in MachineScheduler regardless of in-order or out-of-order cores. So the problem you outlined above w.r.t. different implementations of uop cracking is indeed quite relevant.
  3. MachineScheduler does not use the BufferSize -- which more or less mirrors the issue queue size of each pipe -- at all for out-of-order cores. MicroOpBufferSize, which models the unified reservation station / ROB size, is only used in one really specific place. However, these parameters matter (much) more for llvm-mca.
camel-cdr•7mo ago
@dang The website shows this comment as written 50 minutes ago, but I wrote it over a day ago.
dzaima•7mo ago
The timestamps just get moved around sometimes: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
tomhow•7mo ago
Yes, as dzaima wrote, the displayed timestamps on comments are adjusted if a submission has grace time added, which happens when we put a story in the second chance pool [1]. This is because the time-since-posting is a significant factor in the gravity calculation that pulls a submission's ranking down over time.

I know it's confusing, but if we left all the comment timestamps as seeming much older than the submission, then it would be even more confusing to other readers. (That said, I generally try to avoid doing this, given the confusion it causes).

[1] https://news.ycombinator.com/item?id=26998308

MobiusHorizons•7mo ago
Thanks for elaborating, this was very instructive!
pornel•7mo ago
The tool has great potential, but I always found it too limited, fiddly, or imprecise when I needed to optimize some code.

It only supports consecutive instructions in the innermost loop. It can't include, nor even just ignore, any setup/teardown cost. This means I can't feed it a function as-is (even a tiny one); I need to manually cut out the loop body.

It doesn't support branches at all. I know it's a very hard problem, but that's the problem I have. Quite often I'd like to compare branchless vs branchy versions of an algorithm. I have to manually remove branches that I think are predictable and hope that doesn't alter the analysis.

It's not designed to compare different versions of code, so I need to manually rescale the metrics to compare them (different versions of the loop can be unrolled a different number of times, process a different number of elements per iteration, etc.).

Overall that's laborious, and doesn't work well when I want to tweak the high-level C or Rust code to get the best-optimizing version.
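That rescaling step can at least be mechanized. A hypothetical helper (the function name and the cycle counts are made up for illustration, not real llvm-mca output):

```python
def cycles_per_element(total_cycles: float, iterations: int,
                       elements_per_iteration: int) -> float:
    """Rescale a simulated total cycle count to a per-element cost,
    so differently-unrolled loop versions become comparable."""
    return total_cycles / (iterations * elements_per_iteration)

# e.g. a 4x-unrolled loop vs. a scalar loop, both simulated for 100 iterations
unrolled = cycles_per_element(total_cycles=260, iterations=100,
                              elements_per_iteration=4)
scalar = cycles_per_element(total_cycles=110, iterations=100,
                            elements_per_iteration=1)
print(unrolled, scalar)  # 0.65 vs 1.1 cycles per element
```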

mshockwave•7mo ago
> This means I can't feed any function as-is (even a tiny one). I need to manually cut out the loop body.

> It doesn't support branches at all. I know it's a very hard problem, but that's the problem I have

Shameless self-plug: https://github.com/securesystemslab/LLVM-MCA-Daemon

fschutze•7mo ago
Can you provide a bit more context on why the MCA-Daemon is preferable? It looks interesting, but I don't fully get it.
fossa1•7mo ago
This is a textbook case of micro-architectural reality beating theoretical elegance. It's fascinating how replacing 5 loads with 2 loads + 3 vextq_f32 intrinsics, which should reduce memory pressure, ends up being slower due to execution port contention and dependency chains.
almostgotcaught•7mo ago
> uses information available in LLVM (e.g. scheduling models) to statically measure the performance of machine code in a specific CPU

do people not realize that the scheduling models in LLVM are approximate? like really approximate sometimes. in fact, half the job of working on instruction scheduling in LLVM is cajoling the scheduler into doing the right thing given the approximate models.

Sesse__•7mo ago
My favorite was when the uiCA people found that a toy model (counting instructions and loads, then multiplying them by some simple constants) significantly outperformed llvm-mca on x86 :-)
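For flavor, that toy-model idea can be sketched in a few lines. The weights below are invented for illustration and are not uiCA's actual constants:

```python
def toy_cycles_per_iter(n_instructions: int, n_loads: int,
                        issue_width: float = 4.0,
                        load_ports: float = 2.0) -> float:
    """Predict loop throughput as whichever resource saturates first:
    the front end (instructions/cycle) or the load ports (loads/cycle)."""
    return max(n_instructions / issue_width, n_loads / load_ports)

print(toy_cycles_per_iter(8, 2))  # 2.0 -- issue-limited
print(toy_cycles_per_iter(4, 4))  # 2.0 -- load-limited
```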