You may be thinking of IBM i (formerly known as AS/400 and i5), which has a completely abstracted instruction set that on modern systems is internally recompiled to Power.
The Talos II:
https://wiki.raptorcs.com/wiki/Talos_II
> EATX form factor
> Two POWER9-compatible CPU sockets accepting 4-, 8-, 18-, or 22-core Sforza CPUs
"Entry" level is $5,800 USD.
There won't be a POWER10 version from them because of the proprietary bits required:
https://www.talospace.com/2023/10/the-next-raptor-openpower-...
> POWER10, however, contained closed firmware for the off-chip OMI DRAM bridge and on-chip PPE I/O processor, which meant that the principled team at Raptor resolutely said no to building POWER10 workstations, even though they wanted to.
https://www.osnews.com/story/137555/ibm-hints-at-power11-hop...
They aren't cheap and they aren't for everyone. But it meets my needs and it puts my money where my mouth is.
Back in the '90s and early 2000s, there were several non-x86 architectures that were more powerful, and even 64-bit long before Intel got there: the DEC Alpha, SPARC, and others. I was also too poor to afford those back then, but I remember them fondly.
The microarch is closed and IBM-specific. However, the ISA is open and royalty-free, and the on-chip firmware is open source and you can build it yourself. In this sense it's at least as open as, say, many RISC-V implementations.
> Verilog RTL for OpenSPARC T2 design
> Verification environment for OpenSPARC T2
> Diagnostics tests for OpenSPARC T2
> Scripts and Sun internal tools needed to simulate the design and to do synthesis of the design
> Open source tools needed to simulate the design
https://www.oracle.com/servers/technologies/opensparc-t2-pag...
Could they just list prices? Sure. Will they ever do it? No.
Though the sky is the limit: the typical machine I would order had a list price of about $1 million. Of course, no one pays list. Discounts can be pretty substantial, depending on how much business you do with IBM or how badly they want to get your business.
But IBM _does_ have their own mainframe emulator, zPDT (z Personal Development Tool), sold to their customers for dev and testing (under the name zD&T -- z Development and Test), and to ISVs under their ISV program. That's what IBM's own developers would be using if they're doing stuff under emulation instead of LPARs on real hardware.
(And IBM's emulator is significantly faster than Hercules, FWIW, but it's overall less featureful and lacks Hercules's support for older architectures, more device types, etc.)
Buying older hardware is both easier and harder at the same time. And that's only half the challenge, because the software is strictly licensed and you pay per MIPS.
Here's a kid who bought a mainframe and then brought it up:
Previous generation machines that came off-lease used to be listed on IBM's web site. You could have a fully-maxed-out previous-generation machine for under $250k. Fifteen years ago I was able to get ballpark pricing for a fully-maxed-out new machine, and it was "over a million, but less than two million, and closer to the low end". That being said, the machines are often leased.
If you go with z/VM or z/VSE, the OS and software are typically sold under terms pretty much like normal software, except that pricing varies with the capacity level of the machine, which may be less than the physical number of CPUs in it, since that is a thing in IBM-land.
If you go for z/OS, welcome to the world of metered billing. You're looking at tens of thousands of dollars in monthly recurring charges just to get started, and if you're running exactly the wrong mix of everything, you'll be spending millions on software each month. There's a whole industry that revolves around managing these expenses. Still less complicated than The Cloud.
Having started in 8-bit microcomputers and progressing to various desktop platforms and servers, mainframes were esoteric hulking beasts that were fascinating but remained mysterious to me. In recent years I've started expanding my appreciation of classic mainframes and minis through reading blogs and retro history. This IEEE retrospective on the creation of the IBM 360 was eye-opening. https://spectrum.ieee.org/building-the-system360-mainframe-n...
Having read pretty deeply on the evolution of early computers from the ENIAC era through Whirlwind, CDC, early Cray and DEC, I was familiar with the broad strokes but I never fully appreciated how much the IBM 360 was a major step change in both software and hardware architecture. It's also a dramatic story because it's rare for a decades-old company as successful and massive as IBM to take such a huge "bet the company" risk. The sheer size and breadth of the 360 effort as well as its long-term success profoundly impacted the future of computing. It's interesting seeing how architectural concepts from the 360 (as well as DEC's popular PDP-8, 10 and 11) went on to influence the design of early CPUs and microcomputers. The engineers and hobbyists creating early micros had learned computers in the late 60s and early 70s mostly on the 360s and PDPs which were ubiquitous in universities.
https://direct.mit.edu/books/monograph/4262/IBM-s-360-and-Ea...
After reading the IEEE article I linked above, I got the book the article was based on ("IBM: The Rise and Fall and Reinvention of a Global Icon"). While it's a thorough recounting of IBM's storied history, it wasn't what I was looking for. The author specifically says his focus was not the technical details as he felt too much had been written from that perspective. Instead that book was a more traditional historian's analysis which I found kind of dry.
There are several drawbacks to maintaining this kind of compatibility but, nevertheless, it's impressive.
I find mainframes fascinating, but I'm so unfamiliar with them that I don't know what I'd ever use one for, or why (as opposed to "traditional" hardware or cloud services).
It seems clear to me that, prior to robust systems for orchestrating across multiple servers, you would install a mainframe to handle massive transactional workloads.
What I can never seem to wrap my head around is whether there are still applications out there in typical business settings where a massive machine like this is a technical requirement, or whether it's just that the costs of switching are monumental.
I'd love to understand as well!
Large institutions (corporations, governments) that have existed for more than a couple of decades and have large-scale, mission-critical batch processes running on them, where the core function is relatively consistent over time. Very few, if any, new processes are automated on mainframes at most of these places, and even new requirements for the processes that depend on the mainframe may be built in other systems that handle data before or after the mainframe workflows. But the cost and risk of replacing these well-known, battle-tested systems, finely tuned by years of ironing out misbehavior, usually isn't warranted without some large-scale requirements change that invalidates the basic premises of the system. So they stay around.
It feels like I must be missing something, or maybe just underestimating how much money is involved in this legacy business.
According to a 2024 Forrester Research report, mainframe use is as large as it's ever been, and expected to continue to grow.
Reasons include (not from the report) hardware reliability and scalability, and the ability to churn through crypto-style math in a fraction of the time/cost of cloud computing.
Report is paywalled, but someone on HN might have a free link.
Do you have a credit card? Do you bank in the USA? If you answered "yes" to either of the above questions, you interact indirectly with a mainframe.
Edit: Oh yeah, just saw MasterCard has some job posting for IBM Mainframe/COBOL positions. Fascinating.
Yeah, Linux/Unix are way better on both than they used to be, but on a mainframe, it's just a totally different level.
Here's a brochure that might be useful to read:
https://www.ibm.com/downloads/documents/us-en/107a02e95d48f8...
It's an IBM brochure, so naturally it's pumping mainframes, but it still has lots of interesting facts in it.
There's probably some minor strategic relevance here. E.g. for the government which has some workloads (research labs, etc.) that suit these systems, it's probably a decent idea not to try and migrate over to differently-shaped compute just to keep this CPU IP and its dev teams alive at IBM, to make sure that the US stays capable of whipping up high-performance CPU core designs even if Intel/AMD/Apple falter.
I mean, no one except banks can afford one, let alone make the opex or capex back, so we all resort to MySQL on Linux. But isn't cost the only problem with them?
Banks smaller than the big ~5 in the US cannot afford anything when it comes to IT infrastructure.
I am not aware of a single state/regional bank that wants to have their IBM on premises anymore, at any cost. Most of these customers go through multiple layers of third-party indirection and pay one of roughly three gigantic banking-service vendors for access to hosted core services on top of IBM.
Despite the wildly ramping complexity of working with third parties, banks still universally prefer this to the idea of rewriting the core software for commodity systems.
ESPOL/NEWP is one of the very first systems programming languages, and it is safe by default, with explicit unsafe code blocks.
The whole OS was designed with security first in mind (think Rust in 1961), so their customers are companies that take this very seriously, not only ones running COBOL.
The motto is unsurpassed security.
https://www.unisys.com/product-info-sheet/ecs/clearpath-mast...
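Since the Rust comparison keeps coming up: here's a minimal sketch of that same safe-by-default/unsafe-block discipline, written in Rust syntax because I don't want to misquote NEWP's actual syntax. The mapping to NEWP is my own analogy, not Unisys documentation:

    // Safe by default: ordinary code can't do unchecked memory access.
    fn main() {
        let x: u32 = 42;
        let p: *const u32 = &x; // taking a raw pointer is fine...

        // ...but using it must be explicitly fenced off, visible and
        // auditable, the same opt-in discipline NEWP's UNSAFE blocks
        // enforced decades earlier.
        let y = unsafe { *p };
        assert_eq!(y, 42);
    }

As I understand it, NEWP goes a step further: programs containing unsafe constructs are flagged, and the site has to explicitly mark them as trusted before the OS will run them.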
It also has a ton of high availability features and modularity that _does_ fit with legacy workloads and mainframe expectations, so I'm a little unclear who wants to pay for both sets of features in the same package.
I agree that many mainframe workloads are probably not growing so what used to require a whole machine probably fits in a few cores today.
That thing is a dreadnought of a matmul machine with some serious uptime, and it can crunch numbers without slowing down or losing availability.
Or, possibly, you could implement a massively parallel version of WOPR/Joshua on it and let it rip through scenarios for you. Just don't connect it to live systems (not entirely joking, though).
P.S.: I'd name that version of Joshua JS2/WARP.
If you understand the benefits of cloud over generic x86 compute, then you understand mainframes.
Cloud is mainframes gone full circle.
(Where you can save money is buying Linux or Java accelerators to run things on for free.)
The advantage of this model from a business operations standpoint is that you don't have to think about a single piece of hardware related to the mainframe. IBM will come out automagically and fix issues for you when the mainframe phones home about a problem. A majority of the system architecture is designed on purpose to enable seamless servicing of the machine without impacting operations.