Overall, I'd say give it a shot as it can be really powerful and I do actually like it. Don't be afraid to go 'no, I know how to do this better, myself' and turn it off though.
FYI: messing with IRQ bindings for per-CPU queues of NICs has been a thing for at least 16 years, depending on the NIC. For reference, Intel launched the 82599 back in 2009.
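For anyone curious what that tuning actually looks like: the kernel exposes per-IRQ CPU affinity as a hex bitmask in /proc/irq/<N>/smp_affinity (bit i set means the IRQ may be delivered to CPU i). A minimal sketch of building that mask (function name and CPU numbers are my own illustration, not from any tool):

```python
def smp_affinity_mask(cpus):
    """Build the hex bitmask that /proc/irq/<N>/smp_affinity expects.

    Bit i set in the mask means the IRQ may be delivered to CPU i.
    """
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Pin an IRQ to CPUs 4-7, e.g. one NIC queue per core:
print(smp_affinity_mask([4, 5, 6, 7]))  # f0
```

Applying it is then a root-only write, e.g. `echo f0 > /proc/irq/<N>/smp_affinity`; tools like tuned can manage the equivalent for you.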
Clueless software developers should not be messing with kernel settings like irq bindings. Software that does that is not worth my time.
> Clueless software developers should not be messing with kernel settings like irq bindings. Software that does that is not worth my time.
Come on, man. If you don’t want to help, just don’t respond. If you want to warn someone against something, just be bare-minimum polite. It’s easy.
Edit: Let me explain why I am of this opinion. Of late my life is being made miserable by poor quality software. There seems to be an entire generation of programmers that are skipping the whole part of the design process where one explores the problem space a given piece of software is meant to fit into. In doing so, they are willfully ignoring how the user will experience their software.
This includes networking products that have no means of recovery when the cloud credentials are lost. When the owner of the product loses their credentials and no longer has access to the email address they originally signed up with, the only solution is a manual reset of every single device in the network. Have you ever had to spend hours taking a ladder into a building to rip down a dozen access points that are paperweights because there's no way to recover from this?
Take LLMs. They're great at filling in reams of boilerplate code where the structure is generally the same as everything else. So much of the software industry is about building CRUD apps for your favourite corporation, and there's not a lot of thinking throughout the process. But what happens when you're building a complex application that involves careful performance optimization on many core CPUs and numerous race conditions with complex failure modes? Not so good. And the person driving the LLM isn't going to patch the security holes in the "vibe code" they submitted to the Linux kernel because they don't even know how it works.
Or LLMs that slip past their guard rails and feed desperate individuals information on how to kill themselves?
What about the Full Self Driving vehicles that drive at full speed into emergency vehicles parked on a road with lights flashing that the most naive of drivers would instinctively slow down for while approaching?
What about search engines that have prematurely deployed "AI" features that hallucinate search results for straightforward queries?
How about the world's largest e-commerce website that can't perform a simple keyword search for an attribute of a product (like the size of an SSD)? When I specify 8TB, I mean products that are 8TB, not 512GB!!!
How about CPUs that lose 10-20% of their touted performance gains at launch because of bugs that are "fixed" by software and microcode updates after launch?
What about the email service that blocks emails that are virtually identical to every other email sent to a mailing list because it wasn't delivered using TLS? Oh, but the spammers that have SPF + DKIM + ARC + whatever validation get to have their messages delivered because they have put an Unsubscribe link in the headers.
How about the online advertising platforms that push scams on the elderly with ads that are ephemeral to prevent anyone from sharing a link to what they just saw and report it?
So if I say there is a problem with a software developer being clueless about features they have implemented, it is a valid criticism that is based in facts about the way their software was designed and how it functions.
There are still people who value their reputation enough to put in the effort to explore the problem space and anticipate the user's needs to avoid issues like this, but I fear that they are going to be pushed out of the industry because they're not fast enough in the race to foist the "next big thing" onto an unsuspecting public.
We need simple, reliable, functional software that meets the needs of its users. And we're losing that.
It's a sad state of affairs that we have to deal with in 2025. We have truly entered the age of "Fuck you" software that ignores what it does to its users and actively harms them.
The rest of the rant is valid and the issues are virtually impossible to get fixed.
Please enlighten me: how does one file a bug against spam filtering on gmail.com or get rid of broken AI summaries on google.com that will garner an acknowledgement and get the underlying issue fixed?
Another answer talks about saving 40W. Why not? But it's not much in a normal power-cost environment.
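For scale, here's the arithmetic on a continuous 40 W draw (the electricity price below is my own assumption, not from the thread, and varies a lot by region):

```python
watts = 40
kwh_per_year = watts * 24 * 365 / 1000   # energy for a year of continuous draw
price_per_kwh = 0.15                      # assumed USD/kWh; check your local rate
cost_per_year = kwh_per_year * price_per_kwh
print(kwh_per_year, round(cost_per_year, 2))  # 350.4 kWh/year, $52.56/year
```

So real money if you pay data-center or European electricity prices, but not dramatic on a typical household bill.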
There was another issue I was able to fix with it in AWS, but I legitimately can't recall what it was.
x86 CPUs don't have the power efficiency to do the work we now expect of them in thin-and-light laptops with difficult thermal constraints. You can push them one way or another: you can have them fast, with a fan like a jet engine, or you can have them cool and running like a 10-year-old computer, or put the dial somewhere in between, but there is only so much you can do.
I have one of Intel's old desktop class processors in a refurbished ex-office mini-desktop plugged into a power meter running a few services for the household and the idle usage isn't terrible. I don't understand why my laptop doesn't run colder and longer given the years of development between them.
There is also the race to idle strategy where you spike the performance cores to get back to idle which probably works well with a lot of office usage but not so well with something more demanding like games or benchmarks.
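A back-of-the-envelope sketch of why race-to-idle can pay off (all power numbers below are made-up illustrative values, not measurements): energy over a fixed window is active power times active time, plus idle power for the remainder, so a fast burst can beat a slow crawl as long as the idle floor is low:

```python
def energy_joules(p_active_w, t_active_s, p_idle_w, window_s):
    """Total energy over a fixed window: burst at p_active, then sit at p_idle."""
    return p_active_w * t_active_s + p_idle_w * (window_s - t_active_s)

# Race to idle: 12 W flat-out for 1 s, then 0.5 W idle for the remaining 9 s.
burst = energy_joules(12, 1, 0.5, 10)   # 16.5 J
# Slow and steady: 3 W for 6 s to finish the same work, then idle.
crawl = energy_joules(3, 6, 0.5, 10)    # 20.0 J
print(burst, crawl)
```

The win evaporates when the load never lets the cores reach idle, which is exactly the games/benchmarks case above.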
And x86 not being power efficient is hardly true for the modern AMD mobile chips; it's not quite the "iPad" experience, but it's very good. Comparing to Apple is unfair IMHO, since M* is essentially an iPhone CPU with a souped-up power budget and many years of optimization across hardware, kernel, and userspace that we don't have.
“It does not matter for this case” times 100 is how we get these power-hungry systems.
Python is a bad choice for high-throughput systems but not for reasons that make it power inefficient when used in a scripting capacity like tuned.
> You think the Python interpreter just randomly executes stuff for shits and giggles?

No, but it does use more memory than something compiled to native code.
Meanwhile: TLP is implemented in Bash.
Please stop spreading FUD; it contributes negatively to the world.
Things like: suspend to RAM/disk works, GPU performance is reasonable, and WiFi and disk speeds aren't slower than expected.
If you're on an AMD laptop, then suspend to RAM can be tested with amd-debug-tools[0].
> WiFi
Here[1] is a list of public iperf3 servers. You can test your connection speed with the following (change the host name and port to match your chosen server):
# Test upload speed
iperf3 -c host-name-here -p 5201
# Test download speed
iperf3 -c host-name-here -p 5201 -R
You can also launch your own server so you're not limited by your internet speed (I usually run one on my router): iperf3 -s -p 5201
[0] https://git.kernel.org/pub/scm/linux/kernel/git/superm1/amd-debug-tools.git/about/
[1] https://iperf.fr/iperf-servers.php

https://manpages.debian.org/testing/moreutils/sponge.1.en.ht... good for a privilege-escalated pipe to a file :)
Although all of the commands and files are all-lowercase: tuned.
It is probably lucky for Red Hat that it does not have similar pundits. (-:
There's also API compatibility with power-profiles-daemon, which never helped me that much (I'd also done some basic tuning myself) and which has been unmaintained for a while now. But a variety of utilities still target the old ppd.