It's not even a multi-CPU board...
This is indeed a pretty standard (and weak) ARM server build.
You can get the same CPU, the M128-30 with 128 cores at 3 GHz, for under $800 USD.
You can throw two into a Gigabyte MP72-HB0 and fit it into a full tower case easily.
That'd only cost like $3,200 USD for 256 cores.
RAM is cheap, and that board could take 16 DIMMs.
If you used 16 GB DIMMs like OP, that's 256 GB of RAM, which in a server is not that much... only one gig per core... for like $500 USD.
Maybe for a personal build this seems extravagant but it's nothing special for a server.
Not that much changed since this:
https://marcin.juszkiewicz.com.pl/2019/10/23/what-is-wrong-w...
It's at the top of everyone's HN because a sufficient number of people (including myself) thought it was a nice writeup about fat ARM systems.
Server-side, I also bought used Xeons for an old box and recertified 10TB Exos drives. No issues there either.
The HDDs are a bit of a gamble, but for anything else I can only encourage you to buy used!
> It helps that our local Craigslist offers efficient buyer protection.
What does this mean?
2-3 years is not a lot. My daily driver laptop is from 2011 and still going strong.
Sure, there are “lemons” out there, but there are also a lot of people who just replace their hardware often.
I also only buy used phones (I don’t have high requirements) and as with laptops, batteries are the “weak link” - as you correctly point out.
A brand new battery for my laptop can be had for ~30-65 USD though, and the battery is easy to replace (doesn't even require a screwdriver). I never use it untethered anymore though, so I don't bother.
Non-polymer electrolytic capacitors can dry out, but just about all decent modern motherboards have used polymer-based caps for years now.
My current NAS is my previous desktop, which I bought in 2015. I tended to keep my desktop on 24/7 due to services, and my NAS as well, so it's been running more or less continuously since then. It's on its second PSU but apart from that chugging along.
I've been using older computer parts like this for a long time, and reliability increased markedly after they switched to polymer caps.
Modern higher-end GPUs, due to their immense power requirements, can have components fail, typically in the voltage regulation. Often this can be fixed relatively cheaply.
If buying a desktop I'd check that it works, that it looks good inside (no dust bunnies etc.), and that the seller seems legit, and I'd throw a new PSU in there once I bought it.
If anyone knows of any, let me know!
That is just not true.
Nowadays, most OSS software and most server-side software will run without a hitch on armv8.
A tremendous amount of work has been done to speed up common software on armv8, partially due to the popularity of mobile as a platform but also due to the emergence of ARM servers (Graviton / Neoverse) in the major cloud providers' infrastructure.
Because those cloud offerings have handled for you the problematic case of ARM generally operating as a "closed platform" even when everything is open source.
On a PC server, you usually only hit issues if you want to play with something more exotic on either the software or hardware side. A bog-standard Linux setup is trivial to integrate.
On ARM, even though UEFI is finally available, I recall that even a few years ago there were issues with things like interrupt controller support - and that kind of reputation persists and definitely makes it harder for on-prem ARM to gain traction.
It also does not help that you need to go for pretty pricy systems to avoid vendor lock-in at firmware compatibility level - or had to, until quite recently.
Mac is similarly an issue of a proprietary system with no BMC support. Running one in a rack is always going to be an at least partially half-baked solution. Additionally, you're heavily limited in OS support (for all that I love what Asahi has done, it does not mean you can install, let's say, RHEL on it, even in a virtual machine - because M-series chips do not support the 64 kB page size that became the standard on ARM64 installs in the cloud; RHEL, for example, defaults to it, and that was quite a pain to deal with in a company using MacBooks).
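If you're unsure what a given install is actually using, the running kernel's page size is easy to check at runtime; here's a minimal sketch in Go (nothing assumed beyond the standard library):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// os.Getpagesize reports the page size of the kernel we're running under.
	// A 64 kB-page arm64 distro prints 65536 here; Apple M-series hardware only
	// supports 4 kB and 16 kB translation granules, which is why a 64 kB-page
	// kernel is a problem there.
	fmt.Printf("arch=%s page size=%d bytes\n", runtime.GOARCH, os.Getpagesize())
}
```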
So you end up "shopping" for something that actually matches server hardware, and it gets expensive and sometimes non-trivial, because the ARM server market was (and probably still is) not quite friendly to casually buying a rackmount server with ARM CPUs at affordable prices. Hyperscalers have completely different setups where they can easily tank the complexity costs, because they can bother with customized hardware all the way down to custom ASICs that provide management, I/O, and the hypervisor boot and control path (like AWS Nitro).
One option is to find a VAR that actually sells ARM servers and not just appliances that happen to use ARM inside, but that's honestly a level of complexity (and pricing) above what many smaller companies want.
So if you're on a budget, it's either cloud(-ish) solutions, or maybe one of your engineers can be spared to spend a considerable amount of time building a server from parts that will resemble something production quality.
https://www.theregister.com/2023/08/08/amazon_arm_servers/:
“Bernstein's report estimates that Graviton represented about 20 percent of AWS CPU instances by mid-2022”
And that’s three years ago. Graviton instances are cheaper than (more or less) equivalent x86 ones on AWS, so I think it’s a safe bet that number has gone up since.
I think the main use case for these is some sort of Android build farm, as a CI/CD pipeline with testing of different OS versions and general app building, since they don't have to emulate arm.
A) No, you can't use these as Android build farms, as Android's build tools only work on x86 (go figure).
B) Ampere Altra has higher throughput than x86 on the same lithography and clock frequency; I can't imagine how they'd be slower for web workloads - that's not my experience with these machines under test. Maybe virtualisation has issues (I ran bare-metal containers - as you should).
My original intent was to use these machines as build/test clusters for our Go microservices (and I'd run ARM on GCP), but GCP was a bit too slow to roll it out and now we're too far into feature lock for any migrations of that.
So I added the machines to the general pool of compute and they run bots, internal webservices, etc., on Kubernetes.
The performance is extremely good, only limited by the fact that we can't use them as build machines for the game due to the architecture difference - however, for storage or heavy compute they really outperform the EPYC Milan machines, which are also on a 7 nm lithography.
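As an aside, in a mixed x86-64/arm64 pool like that it can be handy for a service to report which architecture it actually landed on; a minimal Go sketch (the /arch endpoint and port are made up for illustration):

```go
package main

import (
	"fmt"
	"net/http"
	"runtime"
)

func main() {
	// A trivial endpoint so bots/internal services scheduled across a mixed
	// Kubernetes pool can confirm whether they ended up on an arm64 or amd64 node.
	http.HandleFunc("/arch", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%s/%s\n", runtime.GOOS, runtime.GOARCH)
	})
	http.ListenAndServe(":8080", nil)
}
```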
Does qemu-user solve that, or are there special requirements due to JIT and the like that qemu-user can't support?
Their Ampere A1 free tier is pretty good: a 4-core ARM web server with 24 GB of RAM for free.
There have been no consumer chips with Armv9.1-A, but only with Armv9.0-A and with Armv9.2-A. The only CPU with Armv8.6-A that has ever been announced publicly was the now obsolete Neoverse N2. Neoverse N2 has been skipped by Amazon and I do not know if any other major cloud vendor has used it.
So what you really want are CPUs with Armv9.2-A (a superset of Armv8.6-A), i.e. with Cortex-A520, Cortex-A720, Cortex-X4, Cortex-A725 or Cortex-X925 cores.
There are many smartphones launched last year or this year with these CPU cores, but beyond those the list of choices is short: either the very cheap Radxa Orion O6 (Cortex-A720 based), which is fine but has immature software, or a very expensive NVIDIA DGX development system (Cortex-X925 based; $4000 from NVIDIA or $3000 from ASUS), or one of the latest Apple computers, which support Armv8.7-A (no SVE, but with SME).
For the latest Qualcomm CPUs, I have no idea what ISA is supported by them, because Qualcomm always hides very deeply any technical information about their products.
If all you care about is the CPU, then a mid-range Android smartphone in the $400-$500 price range could be a better development system, especially if its USB Type C connector supports USB 3.0 and DisplayPort, like some Motorola Edge models, allowing you to use an external monitor and a docking station.
If you also care about testing together with some standard desktop/server peripherals, the mini-ITX motherboard of Radxa Orion O6 is more appropriate, but encountering bugs in some of its Linux device drivers is likely, which may slow down the development until they are handled.
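When the vendor won't document the ISA level, probing the feature bits the kernel exposes is a pragmatic fallback; a minimal sketch using golang.org/x/sys/cpu (HasSVE and friends are fields that package exposes; I'm not sure SME detection is covered yet, so it's omitted):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// Rough runtime probe of arm64 feature bits reported by the kernel.
	// Absence of SVE on an otherwise modern core (e.g. Apple M-series, which
	// ship SME instead) shows up here as HasSVE == false.
	fmt.Println("LSE atomics:", cpu.ARM64.HasATOMICS)
	fmt.Println("dot product:", cpu.ARM64.HasASIMDDP)
	fmt.Println("SVE:        ", cpu.ARM64.HasSVE)
}
```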
I wonder where this requirement comes from ...
(as a starting point 4k is a "page size for ants" in 2025 - 4MB might be too much however)
But the bigger the page, the fewer TLB entries you need, and the fewer entries in the OS data structures managing memory, etc.
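To put rough numbers on that, TLB reach is just entry count times page size; a quick sketch (the 2048-entry TLB is a made-up figure for illustration):

```go
package main

import "fmt"

func main() {
	const tlbEntries = 2048 // hypothetical last-level TLB size, for illustration only

	for _, page := range []int{4 << 10, 16 << 10, 64 << 10, 2 << 20} {
		// TLB reach = number of entries * page size
		fmt.Printf("page %7d B -> TLB reach %6d MiB\n", page, tlbEntries*page>>20)
	}
}
```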
It should also be possible to patch Linux itself to support different page sizes in different processes/address spaces, which it currently doesn't. It would be quite fiddly (which is why it hasn't happened so far) but it should be technically feasible.
IIRC ARM64 hardware also has some special support (compared to x86 and x86-64) for handling multiple-page "blocks" - that kind of bridges the gap between a smaller and larger page size, opening up further scenarios for better support on the OS side.
What surprises me more is why Red Hat doesn't provide them with the proper hardware..
Q64-22 on eBay (US) for $150-200 USD / 542-723 PLN.
https://www.ebay.com/itm/365380821650
https://www.ebay.com/itm/365572689742
Also, CPU was hardly the biggest cost here.