ETA: particularly because the redundancy was supposed to make it super reliable
It reminds me of the lesson of the Apollo computers. The AGC was the more famous computer, probably rightfully so, but there were actually two computers. The other was the LVDC, made by IBM for controlling the Saturn V during launch. Now that was a proper aerospace computer: redundant everything, a cannot-fail architecture, etc. In contrast, the AGC was a toy. However, this let the AGC be much faster and smaller: instead of reliability, they made it reboot well, and instead of automatic redundancy, they just put two of them.
https://en.wikipedia.org/wiki/Launch_Vehicle_Digital_Compute...
There is something to be learned here; I am not exactly sure what it is. Worse is better?
It’s a bit sad that nobody gives a shit about performance any more. They just provision more cloud hardware. I saved telcos millions upon millions in my early career. I’d jump straight into it again if a job came up, so much fun.
It's hard to get as excited about performance when the typical family sedan has >250HP. Or when a Raspberry Pi 5 can outrun a maxed-out E10K on almost everything.
...(yah, less RAM, but you need fewer client connections when you can get rid of them quickly enough).
The Starfire started at around $800K. Our Linux servers started at around $1K. The Sun box was not 800x faster at anything than a single x86 box.
It was an impressive example of what I considered the wrong road. I think history backs me on this one.
> It’s a bit sad that nobody gives a shit about performance any more.
Everyone gives a shit about performance at some point, but the answer is horizontal scaling. You can’t vertically scale a single machine to run a FAANG. At a certain vertical scale, it starts to look a helluva lot like horizontal scaling (“how many CPUs for this container? How many drives?”), except in a single box with finite and small limits.
As a dev it isn’t your problem if the company you work for just happily provisions and sucks it up.
written in an interpreted language.
Anyway, here’s the front end SMTP servers in 1999, then in-service at 25 Broadway, NYC. I am not sure exactly which model these were, but they were BIG Iron! https://kashpureff.org/album/1999/1999-08-07/M0000002.jpg
I do know that scale-out and scale-up were used for different parts of the stack. The web services were all handled by standard x86 machines running Linux - and were all netbooted in some early orchestration magic, until the day the netboot server died. I think the rationale for the large Sun systems was the amount of memory they could hold - so the user name and spammer databases could be held in-memory on each front end, allowing for a quick ACCEPT or DENY on each incoming message - before saving it out to a mailbox via NFS.
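The idea described above - keep the user and spammer tables entirely in memory so each incoming message gets an ACCEPT/DENY decision without touching disk - can be sketched roughly like this. This is purely illustrative (the table names and decision rules are my assumptions, not the actual system's code):

```python
# Hypothetical in-memory tables; the real front ends reportedly held the full
# user name and spammer databases in RAM on each machine.
VALID_USERS = {"alice", "bob"}          # assumed local-part user table
SPAMMER_DOMAINS = {"spam.example"}      # assumed sender-domain blocklist

def smtp_decision(sender: str, recipient: str) -> str:
    """Return "ACCEPT" or "DENY" using only O(1) in-memory set lookups."""
    # Reject known spammer domains outright.
    _, _, sender_domain = sender.partition("@")
    if sender_domain in SPAMMER_DOMAINS:
        return "DENY"
    # Reject mail for unknown local users.
    recipient_local = recipient.split("@", 1)[0]
    if recipient_local not in VALID_USERS:
        return "DENY"
    # Only accepted messages would go on to the (slow) NFS mailbox write.
    return "ACCEPT"
```

The point of the big-memory boxes is that both checks are RAM-speed set lookups, so the expensive NFS write only happens for mail that will actually be delivered.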
A few days later, over the weekend, our admin noticed that he couldn't log in remotely. He checked it out and… the machine was gone. Stolen.
Somebody within Sun must have tipped off where these things were delivered and rented a crane to undeliver them.
They would have access to site-specific info like how easy it is to get access to that server room to open the windows.
The old saying is “opportunity makes the thief.” Somebody at Sun had much less visibility into the opportunity.
They might mean from Floating Point Systems (FPS):
https://en.wikipedia.org/wiki/Cray#Cray_Research_Inc._and_Cr...
> In December 1991, Cray purchased some of the assets of Floating Point Systems, another minisuper vendor that had moved into the file server market with its SPARC-based Model 500 line.[15] These symmetric multiprocessing machines scaled up to 64 processors and ran a modified version of the Solaris operating system from Sun Microsystems. Cray set up Cray Research Superservers, Inc. (later the Cray Business Systems Division) to sell this system as the Cray S-MP, later replacing it with the Cray CS6400. In spite of these machines being some of the most powerful available when applied to appropriate workloads, Cray was never very successful in this market, possibly due to it being so foreign to its existing market niche.
Some other candidates for server and HPC expertise there (just outside of Portland proper):
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
https://en.wikipedia.org/wiki/Intel#Supercomputers
(I was very lucky to have mentors and teachers from those places and others in the Silicon Forest, and also got to use the S-MP.)
Years later, Ewald and others had a hand in destroying the Beast and Alien CPUs in favor of the good ship Itanic (for reasons).
IMO, Ewald went from company to company, leaving behind a strategic ruin or failure. Cray to SGI to Linux Networx to ...
dmd•2h ago
One time it just stopped responding, and my boss said "now, pay attention" and body-checked the machine as hard as he could.
It immediately started pinging again, and he refused to say anything else about it.
theideaofcoffee•2h ago
bionsystem•2h ago
znpy•2h ago
https://www.youtube.com/watch?v=tDacjrSCeq4
(btw it's titled "Shouting in the Datacenter")
bitwize•34m ago
defaultcompany•2h ago
linsomniac•1h ago
Somewhat related: one morning I was in the office early, and an accounting person came in and asked me for help; her computer wouldn't turn on, and I was the only other one in the office. I went over, poked the power button, and nothing happened. This was on a PC clone. She had a picture of her daughter on top of the computer, so I picked it up, gave the computer a good solid whack on the side, set the picture back down, poked the power button, and it came to life.
We call this: Percussive Engineering