https://www.phoronix.com/news/Linux-6.17-NUMA-Locality-Rando...
https://www.phoronix.com/news/Linux-6.13-Sched_Ext
https://www.phoronix.com/news/DAMON-Self-Tuned-Memory-Tierin...
https://www.phoronix.com/news/Linux-6.14-FUSE
There's some bigger, more recent work I'm missing too, again about allocation & scheduling IIRC. Still trying to find it. The third link is about DAMON, which is trying to do a lot of optimization on its own; a good thread to tug on more!
I have this pocket belief that eventually we might see post-NUMA, post-coherency architectures, where even a single chip acts more like multiple independent clusters that use something closer to networking (CXL or UltraEthernet or something) to allow RDMA, but without coherency.
Even today, the title here is woefully under-describing the problem. An Epyc chip is actually multiple compute dies, each with its own NUMA zone and its own L3 and other caches. For now, yes, each socket's memory all goes through a single IO die and is semi-uniform, but whether that holds is in question, and even today the multiple NUMA zones on one socket already require careful tuning for efficient workload processing.
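Not from the article, but a quick way to see this on your own box: a small libnuma sketch that prints the node count and the kernel's node distance table. How many nodes show up per socket depends on your BIOS NPS setting; nothing here is specific to any one Epyc part.

    /* Hedged sketch: print the NUMA node count and SLIT-style distance
       matrix the kernel exposes. Build: gcc numa_topo.c -lnuma */
    #include <stdio.h>
    #include <numa.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "kernel has no NUMA support\n");
            return 1;
        }
        int nodes = numa_num_configured_nodes();
        printf("configured NUMA nodes: %d\n", nodes);

        /* Distances: 10 = local, larger = farther away.
           With NPS2/NPS4 a single socket shows up as several nodes. */
        for (int i = 0; i < nodes; i++) {
            for (int j = 0; j < nodes; j++)
                printf("%3d ", numa_distance(i, j));
            printf("\n");
        }
        return 0;
    }

On a dual-socket part with NPS4 you'd see something like eight nodes with tiers of distances: same node, same socket, remote socket.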
stego-tech•25m ago
One thing the writeup didn't seem to get into is the lack of scalability of this approach (manual pinning). As core counts and chiplets continue to explode, we still need better ways of scaling manual pinning, or of building more NUMA-aware OSes/applications that can auto-schedule with minimal penalties (a sketch of what manual pinning looks like is below). Don't get me wrong, it's a lot better than ye olden days of dual-core, multi-socket servers and stern warnings from vendors against fussing with NUMA schedulers if you wanted to preserve basic functionality, but it's not a solved problem just yet.
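For concreteness, this is roughly what "manual pinning" means at the code level; a hedged libnuma sketch, with the node choice and buffer size made up for illustration: each worker is tied to one node's CPUs and its buffer is allocated on that node.

    /* Hedged sketch of manual pinning via libnuma.
       Build: gcc pin_worker.c -lnuma */
    #include <stdio.h>
    #include <string.h>
    #include <numa.h>

    #define BUF_SIZE (64UL * 1024 * 1024)

    static int run_worker_on_node(int node) {
        /* Restrict the calling thread to this node's CPUs... */
        if (numa_run_on_node(node) != 0) {
            perror("numa_run_on_node");
            return -1;
        }
        /* ...and put its working set on the same node's memory. */
        void *buf = numa_alloc_onnode(BUF_SIZE, node);
        if (!buf) {
            fprintf(stderr, "allocation on node %d failed\n", node);
            return -1;
        }
        memset(buf, 0, BUF_SIZE); /* fault pages in so they land on `node` */
        /* ... node-local work would go here ... */
        numa_free(buf, BUF_SIZE);
        return 0;
    }

    int main(void) {
        if (numa_available() < 0)
            return 1;
        return run_worker_on_node(0) ? 1 : 0;
    }

numactl --cpunodebind=N --membind=N does the same thing from outside the process, which is where most people start; either way somebody has to keep the node map in their head, and that's the part that stops scaling as chiplet counts grow.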
jasonjayr•22m ago
EDIT: aaaand ... I commented before reading the article, which describes this very mechanism.
colechristensen•20m ago
Most of us are in the realm where the lowest-hanging fruit is database queries that could be 100x faster and functions called a million times a day that only need to be called twice.
stego-tech•9m ago
In 99% of use cases, there are other, easier optimizations to be had. You'll know if you're in the 1% where workload pinning is advantageous.
For everyone else, it's an excellent explainer of why most guides and documentation sternly warn you against fussing with the NUMA scheduler.