But with fewer processes I can totally believe this works out to be the better option. Thank you for the write-up!
In our case, though, if we provide 1/48 of a 10Gbit network, it really doesn't work for our end customers. So we're trying to provide the VMs from a smaller but more up-to-date lineup.
It always comes down to the workload type. For mixed environments (some nodes with a heavy constant load while others see only occasional spikes), the increase in RAM per node was the most important factor, since it's what allowed us to actually decrease the node count. Whole racks with multiple switches were replaced by a single rack with a modest number of servers and a single stacked switch.
It is surprisingly hard to keep a modern CPU core properly saturated unless you are baking global illumination, searching for primes or mining cryptocurrency. I/O and latency will almost always dominate at scale. Moving information is way more expensive than processing it.
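To make that concrete, here's a rough microbenchmark sketch (my own illustration, not from the comment above) contrasting a memory-bound pass over a large array with the same number of additions done on a cache-resident array. The array sizes and the exact ratio are assumptions and will vary by machine:

    # Rough sketch: same number of float additions, very different data movement.
    # Sizes are assumptions; tune them to your own cache/RAM.
    import time
    import numpy as np

    N_LARGE = 50_000_000            # ~400 MB of float64, far bigger than any cache
    N_SMALL = 100_000               # ~800 KB, fits comfortably in L2/L3 cache
    REPEAT = N_LARGE // N_SMALL     # equalise the total element count

    large = np.random.rand(N_LARGE)
    small = np.random.rand(N_SMALL)

    def timed(fn):
        start = time.perf_counter()
        fn()
        return time.perf_counter() - start

    def best_time(fn, repeats=5):
        # Best wall-clock time over a few runs, to smooth out noise.
        return min(timed(fn) for _ in range(repeats))

    def stream_dram():
        # Memory-bound: every element comes from DRAM exactly once.
        return large.sum()

    def stay_in_cache():
        # Same total additions, but the working set stays hot in cache.
        acc = 0.0
        for _ in range(REPEAT):
            acc += small.sum()
        return acc

    print(f"streaming ~400 MB once : {best_time(stream_dram):.4f} s")
    print(f"same ops, cached data  : {best_time(stay_in_cache):.4f} s")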
Transputers just came 30+ years too early.
But I figure it is a broad field, so I'm curious what you're doing and whether it is the best use of time and energy.
I'm also assuming that the generative AI model wouldn't run well on your machine and would need to be elsewhere.
The larger server-grade parts start to shine when the server is doing a lot of different things. The extra memory bandwidth helps keep the CPU fed, and the higher core count reduces the need for context switching because your workloads aren’t competing as much.
The best part about the AMD consumer CPUs is that you can even use ECC RAM if you get the right motherboard.
https://www.asrock.com/mb/AMD/X870%20Taichi%20Creator/index....
Asus has options as well such as https://www.asus.com/motherboards-components/motherboards/pr...
I think it was rarer when AM5 first came out; there were a bunch of ECC-supporting consumer boards for AM4 and Threadripper.
I use a 5950X for running genetic programming and neuroevolution experiments and about once every 100 hours the machine will just not like the state/load it is experiencing and will restart. My approach is to checkpoint as often as possible. I restart the program the next morning and it deserializes the last snapshot from disk. Worst case, I lose 5 minutes of work.
This also helps with Windows updates, power outages, and EM/cosmic radiation.
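For what it's worth, here's a minimal sketch of that checkpoint/resume pattern, assuming the experiment state can be pickled. The file name, interval and the stubbed-out evolutionary-loop functions are placeholders, not the parent's actual code:

    # Minimal checkpoint/resume loop; all names here are placeholders.
    import os
    import pickle
    import time

    CHECKPOINT = "experiment.ckpt"
    INTERVAL_S = 300                      # snapshot roughly every 5 minutes

    def init_population():
        return [0.0] * 100                # stand-in for a real population

    def run_generation(state):
        state["generation"] += 1          # stand-in for one evolutionary step
        return state

    def done(state):
        return state["generation"] >= 1_000

    def load_state():
        # Resume from the last snapshot if one exists, otherwise start fresh.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"generation": 0, "population": init_population()}

    def save_state(state):
        # Write to a temp file and rename, so a crash mid-write can't
        # corrupt the last good snapshot.
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, CHECKPOINT)

    state = load_state()
    last_save = time.monotonic()
    while not done(state):
        state = run_generation(state)
        if time.monotonic() - last_save >= INTERVAL_S:
            save_state(state)
            last_save = time.monotonic()
    save_state(state)                     # final snapshot

With a snapshot every 5 minutes, an unexpected restart costs at most 5 minutes of work, which matches the worst case described above.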
If you really care, you can buy an Epyc-branded AM4 or AM5 CPU, which has remarkably similar specifications and MSRP to the Ryzen offerings.
Although said somewhat tongue in cheek, it has been a rough several years for tech hobbyist consumers. At least the end of Moore's law scaling and the end of Dennard scaling combined to nerf generational improvements enough that getting by on existing hardware wasn't nearly as bad as it would've been 20 years ago.
Now that the AI bubble may just be starting to burst, we've got tariffs to ensure tech consumers still won't see undistorted prices. The silver lining in all this is that it got me into retro gaming and computing which, frankly, is really great.
It's unfortunate that we can only get 16-core CPUs running at 5+ GHz. I would have loved a 32- or 64-core Ryzen 9. The software we use charges per core, so 30% less per-core performance means that much extra licensing cost, which is easily an order of magnitude more than the price of a flagship server CPU. These licenses cost millions per year for a couple of 16-core seats.
So, in the end, CPU speed determines how quickly and economically chips get developed.
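As a back-of-the-envelope sketch of that trade-off (the dollar figure and exact penalty below are made up for illustration, not the actual license prices):

    # Hypothetical numbers to illustrate per-core licensing economics.
    per_seat_cost = 1_000_000      # assumed yearly license for one 16-core seat
    cores_per_seat = 16
    perf_penalty = 0.30            # 30% lower per-core performance

    # Matching throughput with 30% slower cores needs 1/(1-0.3) ≈ 1.43x the
    # cores, i.e. roughly 43% more licensed cores.
    scale = 1 / (1 - perf_penalty)
    extra_cores = cores_per_seat * (scale - 1)
    extra_cost = per_seat_cost * (scale - 1)

    print(f"extra cores needed per seat : {extra_cores:.1f}")
    print(f"extra yearly license cost   : ${extra_cost:,.0f}")

On those assumed numbers the extra license spend alone is several hundred thousand dollars a year, far more than the hardware cost of any flagship server CPU.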
The ROI on hiring a professional overclocker to build, tune and test a workstation is probably at least break-even. As long as the right checksums are in place, extreme OC is just a business write-off.
magicalhippo•5h ago
That said, at such a low core count the primary Epyc advantage is PCIe lanes, no?
furkansahin•5h ago
Also, EPYC's PCIe advantage unfortunately doesn't hold for the Hetzner-provided server setup, because the configurator allows the same number of devices to be attached to both servers.