The article's title is somewhat misleading, since it makes it sound like this would also apply to desktop workloads. The article says it is for datacenters, and that is true, but it would have been better had the title ended with the words “in datacenters” to avoid confusion.
(I typed this on a Linux PC.)
This is specifically a change to the Linux kernel, which is much, much more broadly successful.
I confess I'm dubious about major savings for most home users, though, at least at an absolute level. Networking is only a small slice of a desktop's power budget, and 30% of less than five percent is still not that big of a deal. No reason not to do it, but don't expect to really see the results there.
If you’re doing custom routing with a NUC or a basic Linux box, however, this could yield massive power savings, because that box pretty much only does networking.
It most likely won't. This patch set only affects applications that enable epoll busy poll using the EPIOCSPARAMS ioctl. It's a very specialized option that's not commonly used by applications. Furthermore, network routing in Linux happens in the kernel, not in user space, so this patch set doesn't apply to it at all.
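For context, opting in looks roughly like this. A minimal sketch, assuming a 6.9+ kernel and userspace headers new enough to expose struct epoll_params and EPIOCSPARAMS (on older setups they live in <linux/eventpoll.h>); the numbers are illustrative, not tuned:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/epoll.h>

    /* Ask the kernel to busy poll the NAPI instance backing this epoll fd
       instead of waiting for interrupts. */
    static int enable_epoll_busy_poll(int epfd)
    {
        struct epoll_params params;

        memset(&params, 0, sizeof(params));
        params.busy_poll_usecs  = 64;  /* illustrative: spin up to 64us per wait */
        params.busy_poll_budget = 64;  /* illustrative: packet budget per poll */
        params.prefer_busy_poll = 1;   /* prefer polling over IRQ wakeups */

        if (ioctl(epfd, EPIOCSPARAMS, &params) == -1) {
            perror("EPIOCSPARAMS");
            return -1;
        }
        return 0;
    }

Unless an application does something like the above, the new suspend logic never kicks in, which is why ordinary desktop software won't see the benefit.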
Now, NAPI was already supposed to have some adaptiveness involved, so I guess it's possibly a matter of optimizing that.
But my system is compiling right now, so I'll look at the article in more depth later :V
I much prefer grassroots projects. Made by and for people like me <3 That's why I moved to BSD (well there were other reasons too of course)
For me, it feels like a moral imperative to make my code as efficient as possible, especially when a job will take months to run on hundreds of CPUs.
For the completely uninitiated: taking the most critical code paths uncovered via profiling and asking an LLM to rewrite them to be more efficient might give an average user some help with optimization. If your code takes more than a few minutes to run, you should definitely invest in learning how to profile, common optimizations, hardware latencies and bandwidths, etc.
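To make that concrete, even crude wall-clock timing around a suspected hot spot tells you whether it's worth optimizing at all, before reaching for perf or a real profiler. A minimal sketch (process_batch is a made-up stand-in for your own hot path):

    #include <stdio.h>
    #include <time.h>

    /* stand-in for whatever profiling flagged as hot; replace with real work */
    static void process_batch(void)
    {
        volatile double acc = 0.0;
        for (long i = 0; i < 10L * 1000 * 1000; i++)
            acc += i * 0.5;
    }

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        process_batch();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("process_batch: %.3f ms\n", ms);
        return 0;
    }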
With most everything I use at the consumer level these days, you can just feel the excessive memory allocations and network latency oozing out of it, signaling the inexperience or lack of effort of the developers.
It is unfortunate that many software engineers continue to dismiss this as "premature optimization".
But as soon as I see resource or server costs gradually rising every month (even at idle), running into the tens of thousands, which is a common occurrence as a system scales, it becomes unacceptable to ignore.
I was working with a peer on a click handler for a web button. The code ran in 5-10ms. You have nearly 200ms budget before a user notices sluggishness. My peer "optimized" the 10ms click handler to the point of absolute illegibility. It was doubtful the new implementation was faster.
Also, would you share all newfound efficiencies with your competitors?
I personally believe the majority is wasted. Any code that runs in an interpreted language, JIT/AOT or not, is at a significant disadvantage: in performance measurements it can be anywhere from 2x to 60x worse than the equivalent optimized compiled code.
> it feels like a moral imperative to make my code as efficient as possible
Although we're still talking about fractions of a Watt of power here.
> especially when a job will take months to run on hundreds of CPU.
To the extent that I would say _only_ in these cases are the optimizations even worth considering.
The power-efficiency gain seems to be limited to "network applications using epoll".
The 30% the article talks about seems to be benchmarked on memcached, and here is the ~30-line diff they're probably talking about: https://raw.githubusercontent.com/martinkarsten/irqsuspend/m...
Typically saves 2-3 microseconds going through the kernel network stack.
The "up to 30%" figure is operative when you have a near-idle application that's busy polling, which is already dumb. There are several ways to save energy in that case.
That was my first thought, but it sounds like the OS kernel, not the application, has control over the polling behavior, right?
I'm not au fait with networking in data centres, though; how similar are they in terms of their demands?
This is the sort of performance-efficiency work I want to keep seeing on this site, from distinguished experts who have contributed to critical systems such as the Linux kernel.
Unfortunately, over the last 10-15 years we have been seeing the worst technologies paraded around due to cargo-cultish behaviour: asking candidates to implement the most efficient solution to a problem in interviews, while also choosing extremely inefficient technologies to solve certain problems, because so-called software shops are racing for that VC money, money that goes to hundreds of k8s instances on many over-provisioned servers instead of a few.
Performance efficiency critically matters, and it is the difference between having enough runway for a sustainable business vs having none at all.
And nope, AI agents / vibe coders could not have come up with a solution as correct as the one in the article.
The issue I was trying to resolve was sudden, dramatic changes in traffic. Think: a loop being introduced in the switching, and the associated packet storm. In that case, interrupts could start coming in so fast that the system couldn't get enough non-interrupted time to disable the interrupts, UNLESS you have more CPUs than busy networking interfaces. So my solution then was to make sure that the Linux routers had more cores than network interfaces.
https://didgets.substack.com/p/finding-and-fixing-a-billion-...
“It will take one year,” said the master promptly.
“But we need this system immediately or even sooner! How long will it take if I assign ten programmers to it?”
The master programmer frowned. “In that case, it will take two years.”
“And what if I assign a hundred programmers to it?”
The master programmer shrugged. “Then the design will never be completed,” he said.
— Chapter 3.4 of The Tao of Programming, by Geoffrey James (1987)
Instead, they use DPDK, XDP, or userspace stacks like Onload or VMA, often with SmartNICs doing hardware offload. In those cases this patch wouldn't apply, since packet processing largely bypasses the kernel's normal network stack.
That doesn't mean the patch isn't valuable. It clearly helps in setups where the kernel is in the datapath (e.g., CDNs, ingress nodes, VMs, embedded Linux systems). But it probably won't move the needle for workloads that already bypass the kernel for performance or latency reasons, so the 30% power reduction headline is likely very context-dependent.
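For anyone unfamiliar with what that bypass looks like in the XDP case, here's the smallest possible sketch: a driver-level BPF hook that sees packets before the kernel's socket/epoll machinery does (built with clang -target bpf and libbpf headers, attached with ip(8) or libbpf; a real program would drop, rewrite, or redirect instead of just passing):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_prog(struct xdp_md *ctx)
    {
        /* Runs in the driver for every received frame, before the regular
           kernel network stack (and hence before this patch set) is involved. */
        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";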
d3Xt3r•12h ago
This is basically 3-month-old news now.
ryao•7h ago
https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds...
If it is not reported so people know about it, this will do nothing.
That said, the opt-in here is only relevant when software has already opted into busy polling, so it likely does not apply to software used outside of datacenters.
matkoniecz•6h ago
Similarly, I prefer old books, old computer games, and old movies (or at least ones that aren't currently hot/viral/advertised); this allows a lot of trash to self-filter out, including trash being breathlessly promoted and consumed.
d3Xt3r•7m ago
6.13 is old news now; we're already on 6.14, and even 6.15 isn't far from release (we're already at rc3).
throw-qqqqq•4h ago
I may notice changes when they get adopted by my distro's maintainers, but that usually takes time...