- Visit https://www.cpubenchmark.net/single-thread/ and pick the fastest CPU under $400
- Visit https://www.cpubenchmark.net/multithread/ and verify there are no CPUs at a lower cost with a higher score
For a long time, the answer has been the latest-generation Intel CPU with a 2xxK or 2xxKF model number. These used to be "i7" models; now there's just a 7, and I'm very vaguely annoyed at the branding change.
It would be hard for anybody to convince me that there is a better price/performance optimum. I get it, there was a very disappointing generation or two a few years ago, but that hasn't put me off.
The dominance of Apple CPUs might end up putting me off both Intel and AMD and have me consider only buying Apple hardware, maybe even doing something like running Linux on a Mac Mini in addition to my macOS daily driver.
- generic benchmarks don't pick up unique CPU features, nor do they reflect real-world application performance. For example, Intel has no answer to the X3D V-cache architecture that makes AMD chips better for gaming.
- You can't really ignore motherboard cost and the frequency of platform socket changes. AMD has cheaper boards that last longer (they update their sockets less often, so you can upgrade the chip several times and keep the same board)
- $400 is an arbitrary price ceiling, and you're not looking at dollars per performance unit, you're just cutting off at a maximum price.
- In other words, Intel chips are below $400 because they aren’t fast enough to be worth paying $400+ for.
- If you’re looking for integrated graphics, you’re pretty much always better off with AMD over Intel
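The dollars-per-performance-unit point above is easy to make concrete: instead of filtering by a price ceiling, rank chips by score per dollar. A minimal sketch, with made-up names, scores, and prices purely for illustration (real numbers would come from a benchmark site and current retail listings):

```python
# Hypothetical data for illustration only; not real benchmark scores.
cpus = [
    {"name": "CPU A", "score": 4500, "price": 380},
    {"name": "CPU B", "score": 4200, "price": 290},
    {"name": "CPU C", "score": 3000, "price": 120},
]

def rank_by_value(cpus):
    """Sort CPUs by benchmark score per dollar, best value first."""
    return sorted(cpus, key=lambda c: c["score"] / c["price"], reverse=True)

for c in rank_by_value(cpus):
    print(f'{c["name"]}: {c["score"] / c["price"]:.1f} points/$')
```

Note how the cheap, slower chip can win this metric outright, which is exactly the objection: a price ceiling and a value ranking pick different chips.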
That market is like 90% gamers at least.
3D v-cache is a key feature for that audience. It makes gaming performance significantly better.
My choice of CPU currently has the best value / performance on this benchmark aside from two very old AMD processors which are very slow and just happen to be extremely cheap. No new AMD processors are even remotely close.
It's also currently $285; no top-tier performers are even close except SKUs that are slight variations of the same CPU.
https://www.cpubenchmark.net/cpu_value_available.html
> benchmarks don’t pick up unique CPU features nor they pick up real world application performance. For example, Intel has no answer to the X3D V-cache architecture that makes AMD chips better for gaming.
Happy to be convinced that there's a better benchmark out there, but if you're trying to tell me it's better but in a way that can't be measured, I don't believe you because that's just "bro science".
> If you’re looking for integrated graphics, you’re pretty much always better off with AMD over Intel
I've never been looking for integrated graphics; sometimes I've bought the CPU with it just because it was a little cheaper.
> You can’t really ignore motherboard cost and the frequency of platform socket changes. AMD has cheaper boards that last longer (as in, they update their sockets less often so you can upgrade chips more and keep your same board)
I've always bought a new motherboard with a CPU and either repurposed, sold, or given away the old CPU/motherboard combination, which seems like a much better use of money. The last one went to somebody's little brother. The one before that is my NAS. There's no meaningful difference between comparable motherboards to me, particularly when the competing AMD CPUs are nearly double the cost or more.
Your example of tossing your motherboard away is not a very good one here; that was your choice to act illogically. My AMD AM4 motherboard started with a Ryzen 1600, then a 3600, and now runs a 5600X3D.
Basically I’ve had this same motherboard for something like 6 or 7 years and the performance difference between a Ryzen 1600 and 5600X3D is completely wild. I’ve had no need to buy a new board for the better part of a decade. If you’re buying a new board with every processor purchase that’s a huge cost difference.
When I say that generic benchmarks are bad, I mean that CPU benchmarks like the one you just linked are bad. You need more practical benchmarks: in-game FPS, how long a turn takes in Stellaris, how long it takes to encode a video or open a ZIP file, etc.
That is where the X3D chips play in as well. You might be able to buy an Intel chip with more cores and better productivity performance, but if you’re eyeing gaming performance like I imagine most desktop DIY builders are, you’d rather get better gaming oriented performance and sacrifice some productivity performance.
If you are gaming and buy a 9800X3D, Intel literally doesn't make anything faster at any price. You can offer Intel $5,000 and they won't have anything to sell you that goes faster at playing games.
At lower price points, AMD still ends up making a lot of sense for their long-supported sockets, low cost boards, better power/heat efficiency, and X3D chips performing well in gaming applications.
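The "practical benchmark" idea above boils down to timing a real task rather than a synthetic score. A minimal sketch, using compression as a stand-in workload (the helper name and repeat count are illustrative, not from any benchmark suite):

```python
import time
import zlib

def time_task(fn, repeats=3):
    """Run fn several times and return the best wall-clock time,
    the way practical benchmarks time a real task end to end."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload: compressing a buffer, as a proxy for
# "how long does it take to zip a file" style benchmarks.
payload = b"x" * 1_000_000
elapsed = time_task(lambda: zlib.compress(payload))
print(f"compress 1 MB: {elapsed * 1000:.1f} ms")
```

Swap the lambda for a video encode, a Stellaris save load, or whatever task you actually care about, and you get a number that reflects your workload instead of a generic score.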
So, it should be visible in gaming benchmarks, right?
>- If you’re looking for integrated graphics, you’re pretty much always better off with AMD over Intel
What? Lunar Lake CPUs have a strong iGPU.
What you will tangibly miss is low noise, low power draw hardware and very, very specific workloads being faster than the cutting edge AMD/Nvidia stack people are using today.
However, the reason my personal laptop is a Framework 13 and not a MacBook Pro is because I value upgradability and user-servicability. My Framework has 32GB of RAM, and I could upgrade it to 64GB at a later date. Its SSD, currently 1TB, is also upgradable. I miss the days of my 2006 Core Duo MacBook, which had user-serviceable RAM and storage. My Ryzen 9 3900 replaced a 2013 Mac Pro.
Additionally, macOS doesn't spark the same type of joy that it used to; I used to use Macs as my personal daily drivers from 2006 to 2022. While macOS is less annoying than Windows to me, and while I love some of the bundled apps like Preview.app and Dictionary.app, the annoyances have grown over the years, such as needing to click a security prompt each time I run lldb on a freshly-compiled program. I also do not like the UI directions that macOS has been taking during the Tim Cook era; I didn't like the changes made during Yosemite (though I was able to live with them) and I don't plan to upgrade from Sequoia to Tahoe until I have to for security reasons.
Apple's ARM hardware is appealing enough to me that I'd love to purchase an M4 Mac Mini to have a powerful, inexpensive, low-power ARM device to play with. It would be a great Linux or FreeBSD system, except, due to the hardware being undocumented, the only OS that can run on the M4 Mac Mini for now is macOS. It's a shame; Apple could probably sell more Macs if they at least documented enough to make it easier for developers of alternative operating systems to write drivers for them.
Meanwhile, my Apple Mac Mini 2012 (Intel CPU) - which needed extraordinary efforts by me to make it triple-boot macOS, Windows 10, and Linux (trust Apple to make it hard to install other OSes on an Intel-CPU PC) - is slow and fussy because of its meagre RAM and old HDD (not SSD). But the Apple service center refused to upgrade this Mac Mini with new RAM and a new SSD, citing Apple policies that don't allow such upgrades. Apple has made it quite hard to custom-upgrade such iDevices, so this little PC is lying unused in my cupboard, waiting for the rainy day when I'll get the courage to tinker with it myself and upgrade it. And even if I did upgrade the hardware, this Mac Mini can only be upgraded to macOS Catalina, and it won't get security updates, because Apple has stopped supporting it.
P.S.: I hate Apple.
Your comment mostly makes sense, but this is a weird mention when Windows is even worse on this now, with Win11 not supporting much more recent machines.
FYI, www.cpubenchmark.net is a running joke for how bad it is. It's not a good resource.
There are a few variations of these sites like userbenchmark that have been primarily built for SEO spam and capturing Google visitors who don’t know where to go for good buying advice.
Buying a CPU isn’t really that complicated. For gaming it’s easy to find gaming benchmarks or buyers guides. For productivity you can check Phoronix or even the GeekBench details in the compiler section if that’s what you’re doing.
Most people can skip that and just read any buyers guide. There aren’t that many CPU models to choose from on the Pareto front of price and performance.
I guess the reason people prefer something like cpubenchmark is that it seems way easier to get an overview / see the data in aggregate. GeekBench (https://browser.geekbench.com/v6/cpu/multicore), for example, just lists all benchmark runs, even when the CPU is the same. Not exactly conducive to finding the right CPU.
That's not the prevailing opinion at all. Passmark is just fine and does a lot to keep their data solid like taking extra steps to filter overclocked CPUs. Then you go on to recommend GeekBench??? Right...
You might be confusing them with UserBenchmark.
There are also many criticisms of cpubenchmark that are much more minor, like its oversimplified testing leading to weird anomalous score gaps between extremely similar CPUs.
For the average consumer, I think cpu benchmark is fine and probably as good as you can ask for without getting into the weeds which defeats the purpose really.
Is it that much better? Show me.
I mean, just don't say anything if what you're trying to add is just "go look it up yourself".
https://openbenchmarking.org/test/pts/build-linux-kernel-1.1...
There are lots more tests in the sidebar.
If they've written SIMD code themselves, then the gap between the two shouldn't be big. (AMD's chips are actually better for SIMD nowadays, since recent models support the AVX-512 instruction set, while Intel ended support for it due to the P/E-core split fiasco.)
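One way to see which camp a given chip falls into, on Linux at least, is to check the feature flags that show up in /proc/cpuinfo for the `avx512f` (AVX-512 Foundation) flag. A minimal sketch; the sample flag strings below are made up for illustration, and on a real machine you'd read the line from /proc/cpuinfo instead:

```python
def supports_avx512(flags_line):
    """Return True if a /proc/cpuinfo-style flags line advertises
    the AVX-512 Foundation instructions."""
    flags = set(flags_line.split())
    return "avx512f" in flags  # avx512f = AVX-512 Foundation

# Made-up sample flag lines, for illustration only.
zen4_like = "fpu sse sse2 avx avx2 avx512f avx512dq avx512bw"
hybrid_intel_like = "fpu sse sse2 avx avx2"

print(supports_avx512(zen4_like))          # → True
print(supports_avx512(hybrid_intel_like))  # → False
```

SIMD-heavy code typically does this kind of check once at startup and dispatches to an AVX-512 or AVX2 code path accordingly.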
That's a bit niche though. But for a NAS it's great.
https://www.digitec.ch/en/page/intel-has-a-big-problem-unsta...
If so, that’s a hell of a way for Intel to secure its future.
Their best engineers are probably still going to be in Taiwan, but with the rate at which TSMC is building fabs overseas, it shouldn't matter much.
But there's not a huge market (and therefore, FOSS dev time spent on it) for emulating AArch64 on x86 the way there is x86 on AArch64, so if your options are to build your own AArch64 emulation for x86 (or drop a fortune into an existing FOSS option), or building something based on AArch64 and using the existing x86 emulation implementations, one of these has much more predictable costs and outcomes today.
If an ARM device both suits the goals and has lower risk, there's little upside other than forcing the project to exist.
And since there's very few pieces of AArch64-exclusive software that Valve is trying to support, that's not a goal that benefits the project.
(If I were guessing without doing much research, Switch emulators might be the largest investment of effort in open source on x86 systems running AArch64 things performantly, but that's certainly not a market segment Valve is targeting, so...)
They didn't need to back a bunch of the projects they backed (radv/aco are a big example where you could claim there was redundancy, so they weren't strictly necessary), but the results paid off, even to the point where AMD is dropping their amdvlk in favor of using radv/aco.
They didn't, strictly speaking, need to back zink either (OpenGL on Vulkan), but they decided to, and it gives them the long-term ability to have OpenGL even if native drivers go away, plus the upside of bringing up OpenGL on any Vulkan-capable target out of the box.
It's just something in their style to do when there seems to be an obvious gap.
I'm not claiming they're not beneficial or acting in bad faith, but their past contributions are "making enormous improvements in the ecosystem when they think this is the best option for them." Things like radv+aco you could see as "we looked at AMD's solution for this and decided it was lower risk to build a better one ourselves", and I can't really argue with their logic: even if AMD hadn't given up and gone for it, they have a history of switching solutions every so often, and then god help you if you built around the old one.
Zink makes sense because it closes a unique gap (supporting older games if OpenGL support sunsets) in their platform, not just for its own sake.
AArch64 emulation on x86 is useful, but doesn't fill such a gap for them, IMO. Nothing is uniquely targeting AArch64 in the library of things they want to support on Steam platforms except non-game software targeting only macOS/AArch64.
I would predict, since they seem to be adding some level of Steam game support for handling Android games [1] [2], that you might see them financing such a thing in the next couple years if there's enough things exclusively targeting Android or AArch64 that they want to de-risk people complaining they can no longer play them in 5-10 years.
[1] - https://www.gamingonlinux.com/2025/11/steamworks-sdk-adds-su...
[2] - https://www.theverge.com/news/818672/valve-android-apps-stea...
It's not such an immediately cool thing as radv or zink, but it's still useful.
The original smartphones, like the Nokia Communicator 9110i, were x86-based.
AMD previously had very impressive low-power CPUs, like the Geode, running under 1-watt.
Intel took another run at it with Atom and managed to make x86 phones (e.g. the Asus ZenFone) slightly better than contemporary ARM-based devices, but the price of their silicon was quite a bit higher than the ARM competitors'. And Intel had sunk so much money into Atom, in an attempt to dominate the phone/tablet market, that they couldn't be happy just eking out a small sliver of the market by being only slightly better at a significantly premium price.
You need a modem if you want to make a smartphone. And Qualcomm makes sure to, first, make some parts of the modem a part of their SoC, and second, never give a better deal on a standalone modem than on a modem and SoC combo.
Sure, AMD could make their own modem, but it took Apple ages to develop a modem in-house. And AMD could partner with someone like Mediatek and use their hardware - but, again, that would require Mediatek to prop up their competition in SoC space, so, don't expect good deals.
I would prefer them to start with WiFi though, since Intel made their latest chips impossible to use with AMD CPUs.
That didn't work out well when Intel tried it.
https://www.extremetech.com/mobile/302822-intel-blames-qualc...
I agree though that Qualcomm is causing a lot of anti-competitive problems.
Just price, I'd say.
I don't think it is price. Intel has had a bigger R&D budget for CPU designs than Apple. If you mean manufacturing cost, I also doubt this, since AMD and Intel chips are often physically bigger than Apple chips in die size but still slower and less efficient. See the M4 Pro vs AMD's Strix Halo as an example where Apple's chip is smaller, faster, and more efficient.

Apple's CPU cores have typically been significantly bigger than any other CPU cores made with the same manufacturing process. This did not matter for Apple, because they do not sell them to others and because they have always used denser CMOS processes than the others.
Apple's CPUs have much better energy efficiency than any others when running a single-threaded application. This is due to having a much higher IPC, e.g. up to 50% higher, and a correspondingly lower clock frequency.
On the other hand, the energy-efficiency when running multithreaded applications has always been very close to Intel/AMD, the differences being explained by Apple having earlier access to the up-to-date manufacturing processes.
Besides efficiency in single-threaded applications, the other point where Apple wins in efficiency is in the total system efficiency, because the Apple devices typically have lower idle power consumption than the competition, due to the integrated system design and the use of high-quality components, e.g. efficient displays. This better total system efficiency is what leads to longer battery lifetimes, not a better CPU efficiency.
The Apple CPUs are fast for the kind of applications needed by most home users, but for applications that have greater demands for computational performance, e.g. with big numbers or with array operations, they are inferior to the AMD/Intel CPUs with AVX-512.
Where is your source?
There's plenty of die shots showing that Apple P cores are either smaller or around the same size as AMD and Intel P cores. Plenty of people on Reddit have done the analysis as well.
That's why a monopoly is a bad thing: it allows the manufacturer to pressure the vendors. The reason AMD doesn't sell much in the laptop market is extremely simple: you can't buy one. Vendors usually offer far fewer AMD laptop models than Intel ones, and those usually sell out quickly.
Every time I've tried it in the last 10 years, it's felt like I was teleported into the late 90s PC era - weird bugs in specific drivers that you can find lots of reports of for this specific model and no resolution, heat management that feels like someone in a basement strung things together in 5 minutes and never tested it again, strange failures in "plug and play" support for USB devices that work on every other machine flawlessly with the same cable and device, and don't get me started on Bluetooth. (My favorite ever might be the time that attempting to pair a specific pair of headphones to the laptop shut off every USB port, reproducibly, apparently because the BT adapter was connected over the USB M.2 pins to the root hub, and was crashing in firmware, so both Windows and Linux did the same style of dance of "try to reset it, once that fails, go up a level and turn off the complex to make sure other things keep working"...except up one level was the root.) (Though, to be fair to AMD, that was an Intel BT/wifi chip in an AMD laptop...)
I really want to like and recommend AMD mobile hardware, but every time I've tried it has been a shitshow without fail.
I'm very impressed though. I had no idea they were near 1/3 of the desktop market. Good for them.
AMD, I heard, seemed less capable, less interested, or unable to justify it at their quantities, which meant their engineering support packages were good for ATX mainboards only, and maybe the occasional console.
This must have changed a while ago, does anyone have the tea?
To me they seem to be dominating the console scene, doing the CPU and GPU for all consoles from the last two generations, except for Switch and Wii U.
From my recent experience buying an AMD mini-PC (a Minisforum AI HX370), I don't feel it exists, because there is no need for it. You just plug it into the power socket and then it works (which is a good thing).
I'm serious. Doesn't seem like those are useful at anything short of high end server market. And even there the benefits are questionable.
Like you have to replace OneAPI, which sounds easy because it’s just one thing, but like do you really want to replace BLAS, LAPACK, MPI, ifort/icc… and then you still need to find a sparse matrix solver…
jauntywundrkind•2mo ago
They still have great laptop & desktop parts; in fact they're essentially the same parts as the servers (with fewer Core Complex Die (CCD) chiplets and a simpler IO die)! Their embedded and mobile chips are all the same chiplets too!
And there are some APU parts that are more consumer focused, which have been quite solid. And now Strix Halo, which, were it not for DDR5 prices shooting to the moon, would be an incredible prosumer APU.
Where AMD is just totally missing is the low end. There's nothing like the Intel N100/N97/N150, which is a super ragingly popular chip for consumer appliances like NAS boxes. I'm hoping their Sound Wave design is real, materializes, and offers something a bit more affordable than their usual.
The news at the end of October was that their new low-end lineup is going to be old Zen 2 & Zen 3 chips. That's mostly fine; still amazing chips, just not quite as fast & efficient. But there just aren't a lot of small AMD parts. https://wccftech.com/amd-prepares-rebadged-zen-2-ryzen-10-an...
It's crazy how AMD has innovated by building far, far fewer designs than in the past. There's not a bunch of different chips designed for different price points; the whole range across all markets (for CPUs) is the same core, the same ~3 designs, variously built out.
I do wish AMD would have a better low-end story. The Steam Deck is such a killer machine, and no one else can make anything with such clear value, because no one else can buy a bunch of slightly weird old chips for cheap; everyone else has to buy much more expensive mainline chips. I really wish there were some smaller interesting APUs available.
jauntywundrkind•2mo ago
The newest RDNA4 fixes the pretty weak encoder performance for game streaming and is now competitive. Unfortunately (at release, at least) AV1 is still pretty weak. https://youtu.be/kkf7q4L5xl8
One thing noted is that AMD seems to have really good output at lower bandwidth (~4min mark). It would be nice to have even deeper dives into this, and it would also be interesting to know whether the quality changes over time with driver updates. One of the comments details how a bunch of the asks in this video (split-frame encoding, improved AV1) already landed a month after the video. Hopefully progress continues for RDNA4! https://youtube.com/watch?v=kkf7q4L5xl8&lc=UgzYN-iSC7N097XZi...
iknowstuff•2mo ago
My 3080 sffpc eats 70W idle and 400W under load.
Game performance is roughly the same from a normie point of view.
p_l•2mo ago
The OS can just leave BT on and still get interrupt and service it.
overfeed•2mo ago
AMD bet the farm on the chiplet architecture, and their risky bet has paid off in a big way. Intel's fortunately timed stumbling helped, but AMD ultimately made the right call about core-scaling at a time when most games and software titles were not written to take advantage of multicore parallelism. IMO, AMD deserves much more than the 25% marketshare, as Zen chips deliver amazing value.
toast0•2mo ago
Depends on where in embedded, but the laptop and APU chips are monolithic, not chiplet based.