frontpage.

We built another object storage

https://fractalbits.com/blog/why-we-built-another-object-storage/
60•fractalbits•2h ago•9 comments

Java FFM zero-copy transport using io_uring

https://www.mvp.express/
25•mands•5d ago•6 comments

How exchanges turn order books into distributed logs

https://quant.engineering/exchange-order-book-distributed-logs.html
49•rundef•5d ago•17 comments

macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt

https://developer.apple.com/documentation/macos-release-notes/macos-26_2-release-notes#RDMA-over-...
467•guiand•18h ago•237 comments

AI is bringing old nuclear plants out of retirement

https://www.wbur.org/hereandnow/2025/12/09/nuclear-power-ai
33•geox•1h ago•25 comments

Sick of smart TVs? Here are your best options

https://arstechnica.com/gadgets/2025/12/the-ars-technica-guide-to-dumb-tvs/
433•fleahunter•1d ago•362 comments

Photographer built a medium-format rangefinder, and so can you

https://petapixel.com/2025/12/06/this-photographer-built-an-awesome-medium-format-rangefinder-and...
78•shinryuu•6d ago•9 comments

Apple has locked my Apple ID, and I have no recourse. A plea for help

https://hey.paris/posts/appleid/
865•parisidau•10h ago•445 comments

GNU Unifont

https://unifoundry.com/unifont/index.html
287•remywang•18h ago•68 comments

A 'toaster with a lens': The story behind the first handheld digital camera

https://www.bbc.com/future/article/20251205-how-the-handheld-digital-camera-was-born
42•selvan•5d ago•18 comments

Beautiful Abelian Sandpiles

https://eavan.blog/posts/beautiful-sandpiles.html
83•eavan0•3d ago•16 comments

Rats Play DOOM

https://ratsplaydoom.com/
332•ano-ther•18h ago•123 comments

Show HN: Tiny VM sandbox in C with apps in Rust, C and Zig

https://github.com/ringtailsoftware/uvm32
167•trj•17h ago•11 comments

OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI

https://simonwillison.net/2025/Dec/12/openai-skills/
481•simonw•15h ago•271 comments

Computer Animator and Amiga fanatic Dick Van Dyke turns 100

109•ggm•6h ago•23 comments

Will West Coast Jazz Get Some Respect?

https://www.honest-broker.com/p/will-west-coast-jazz-finally-get
10•paulpauper•6d ago•2 comments

Formula One Handovers and Handovers From Surgery to Intensive Care (2008) [pdf]

https://gwern.net/doc/technology/2008-sower.pdf
82•bookofjoe•6d ago•33 comments

Show HN: I made a spreadsheet where formulas also update backwards

https://victorpoughon.github.io/bidicalc/
179•fouronnes3•1d ago•85 comments

Freeing a Xiaomi humidifier from the cloud

https://0l.de/blog/2025/11/xiaomi-humidifier/
126•stv0g•1d ago•51 comments

Obscuring P2P Nodes with Dandelion

https://www.johndcook.com/blog/2025/12/08/dandelion/
57•ColinWright•4d ago•1 comments

Go is portable, until it isn't

https://simpleobservability.com/blog/go-portable-until-isnt
119•khazit•6d ago•101 comments

Ensuring a National Policy Framework for Artificial Intelligence

https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-nati...
169•andsoitis•1d ago•217 comments

Poor Johnny still won't encrypt

https://bfswa.substack.com/p/poor-johnny-still-wont-encrypt
52•zdw•10h ago•64 comments

YouTube's CEO limits his kids' social media use – other tech bosses do the same

https://www.cnbc.com/2025/12/13/youtubes-ceo-is-latest-tech-boss-limiting-his-kids-social-media-u...
84•pseudolus•3h ago•67 comments

Slax: Live Pocket Linux

https://www.slax.org/
41•Ulf950•5d ago•5 comments

50 years of proof assistants

https://lawrencecpaulson.github.io//2025/12/05/History_of_Proof_Assistants.html
107•baruchel•15h ago•17 comments

Gild Just One Lily

https://www.smashingmagazine.com/2025/04/gild-just-one-lily/
29•serialx•5d ago•5 comments

Capsudo: Rethinking sudo with object capabilities

https://ariadne.space/2025/12/12/rethinking-sudo-with-object-capabilities.html
75•fanf2•17h ago•44 comments

Google removes Sci-Hub domains from U.S. search results due to dated court order

https://torrentfreak.com/google-removes-sci-hub-domains-from-u-s-search-results-due-to-dated-cour...
193•t-3•11h ago•34 comments

String theory inspires a brilliant, baffling new math proof

https://www.quantamagazine.org/string-theory-inspires-a-brilliant-baffling-new-math-proof-20251212/
167•ArmageddonIt•22h ago•154 comments

Cray versus Raspberry Pi

https://www.aardvark.co.nz/daily/2025/0611.shtml
173•flyingkiwi44•6mo ago

Comments

hoppp•6mo ago
The Cray-1 did look futuristic, like something out of Star Trek.

It kinda reminded me of the trash can Mac. I wonder if it was the inspiration for it.

Mountain_Skies•6mo ago
If I ever have reason to build a Pi cluster, I'm putting it in a Cray X-MP-shaped case.
v9v•6mo ago
Related: a Pi Pico cluster that looks like a Cray computer https://hackaday.com/2023/04/09/parallel-computing-on-the-pi...
einsteinx2•6mo ago
> It kinda reminded me of the trash can Mac. I wonder if it was the inspiration for it

Ironically the trash can Mac actually looked strikingly similar in size and shape to actual small trash cans that were all over the Apple campus when I worked there. I’d see them in the cafeteria every day. They were aluminum though, but otherwise very similar. I always wondered if they had anything to do with the design of the computer, even if only subconsciously.

delichon•6mo ago
> but then again if you'd showed me an RPi5 back in 1977 I would have said "nah, impossible" so who knows?

I was reading lots of scifi in 1977, so I may have tried to talk to the pi like Scotty trying to talk to the mouse in Star Trek IV. And since you can run an LLM and text to speech on an RPi5, it might have answered.

Mountain_Skies•6mo ago
Someday real soon, kids being shown episodes of 'Knight Rider' by their grandparents won't understand why a talking car was so futuristic.
sublinear•6mo ago
Was that point not almost a decade ago?
Mountain_Skies•6mo ago
Not really. My 1983 Datsun would talk, but it couldn't converse. Alexa and Siri couldn't hold a conversation anywhere near the level KITT did. There's a big difference. With LLMs, we're getting close.
bsoles•6mo ago
The Commodore 64 had text-to-speech in the late 80s.

Also, in the 80s my friend's father was the driver for a member of the French Consulate in Turkey. His car (a Renault) had speech functionality.

nereye•6mo ago
Early 80s (1982), according to Wikipedia:

https://en.m.wikipedia.org/wiki/Software_Automatic_Mouth

dahart•6mo ago
That brings back some memories. My friend and I messed around with S.A.M. on his Atari 800 a lot when we were kids. We would crank call the parents of other kids we knew and have SAM tell them their kids had skipped school and might get suspended. It was funny to some twelve year olds anyway.

SAM had a basic mode where you just type English, but it also had an advanced phonetic input mode where you could control the sound and stress on every syllable. My favorite thing to do was try to give SAM a British accent.

hulitu•6mo ago
> The Commodore 64 had text-to-speech in the late 80s.

Yes, and Windows had Narrator. And that's all, for 20 years now.

anthk•6mo ago
Text-to-speech is trivial with Dr. Sbaitso or Flite on ARMv5/Pentium 90 machines.
mgerdts•6mo ago
Your car had a tiny record player.

https://www.autoweek.com/car-life/but-wait-theres-more/a1875...

KineticLensman•6mo ago
Like James Bond's Aston Martin with a satnav/tracking device in 1964's Goldfinger. Kids would know what that was but they might not understand why Bond had to continually shift some sort of stick to change the car's gear.
anthk•6mo ago
Gear shifting is still a thing in Europe, and mandatory if you want to get your driver's license.
prmoustache•6mo ago
You can get a driver's license with an automatic, but it just means you can only drive automatics.

Not being able to drive manuals would have been a huge deal 20 years ago, but with hybrids and EVs all being automatic it's not much of a downside nowadays, unless you want to buy an old car or borrow a friend's. Most rental fleets have automatics available nowadays.

pjmlp•5mo ago
Our family owns a Ford Focus hybrid in Europe; I can tell you they are not all automatic.
inkyoto•6mo ago
At this point, it is a historical artefact that will cease to exist soon enough.

Electric vehicles do not have gearboxes as there are no converters, so there is nothing to shift up or down. The few performance EVs that have been announced (and maybe even released) with a gear stick do so for nostalgic reasons; the gear shift and the accompanying experience are simulated entirely in software.

zelos•6mo ago
The Porsche Taycan has two forward gears, but it's apparently the only EV that does: https://www.wired.com/story/electric-car-two-speed-transmiss...
lesny_ludek•5mo ago
New Mercedes CLA has that too
pjmlp•5mo ago
At the rate EVs are being sold in Europe, that "soon enough" is a couple of decades away.

I certainly don't plan to buy anything but hybrids until EV prices and ranges are at comparable levels.

Our current hybrid has a six-speed gearbox.

heelix•6mo ago
The self driving aspect, amazingly, is already here and considered mundane.
DrillShopper•6mo ago
Oh really? What vehicle can I buy today, drive home, get twice the legal limit drunk, flop in the back alone to take a nap while my car drives me two hours away to a relative's house?

I'd really like to buy that car so I await your response.

tekla•6mo ago
A Tesla is pretty close. https://www.youtube.com/watch?v=4RZfkU1QgTI
more_corn•6mo ago
Tesla is in no way close.
4ndrewl•6mo ago
They're "cold-fusion" close. Which means a perpetual "few years".
bigfatkitten•6mo ago
They’ve been “close” for over a decade now.
ptero•6mo ago
That's a jurisdiction problem, not a technology problem. No tech is foolproof, but even with the current technology someone would be much safer (for others, too) in the back seat than trying to drive tired, borderline DUI, at night, in an unfamiliar town. Which many folks regularly do, for example on business travel.

The reason I cannot do this today is laws, not technology. My 2c.

DrillShopper•6mo ago
The claim is that self driving is mundane - something everyone can have if they want. A standard feature, so entwined in the background of life that it is unremarkable.

The fact that there is no system out there that I can own, jump in the back of while in no condition to drive, and use to get to my destination safely defeats that claim. It's not even so mundane that everyone has the anemic Tesla self-driving feature that runs over kids and slams into highway barriers.

It may also be a matter of laws, but the underlying tech is also still not there given all the warnings any current "self driving car" systems give about having to pay attention to the road and keep your hands on the wheel even if the laws weren't there.

Could I get behind the wheel of my self driving car, drunk, and make it there safely? No, I definitely couldn't, and I understand why those laws exist with all of the existing failure modes of self driving cars.

People have called the current state of LLMs "sparkling AutoComplete". The current state of "self-driving cars" is "sparkling lane assist" with a chaser of adaptive cruise control.

dmd•6mo ago
The only thing stopping a Waymo from doing that is laws.
more_corn•6mo ago
You can do all that in a Waymo except for the “buy” part. When asked about that Sergey said “why do you want to own a car? You have to maintain it, insure it, park it at home and at work. Don’t you really just want to get where you’re going and have someone else figure out the rest?” This was back before google ate the evil pill. Now their philosophy is more like “don’t fall asleep, we can get a good deal on your kidneys, after that we’ll sell your mom’s kidneys too”
DrillShopper•6mo ago
I can't buy a Waymo
dizhn•6mo ago
Kitt was funny though. (For its time)
BizarroLand•5mo ago
For the time yes, but if I bought a car tomorrow and it had Kitt's sass I would cut his speakers in a week.

I don't need my car to constantly whine, needle, harass, demean, and insult me.

jjkaczor•5mo ago
Had the same thing happen when standalone GPS units were introduced to the mass market, I got one - and one of my tech friends suggested replacing the stock voice with a sarcastic/whiny one (maybe ... C3PO)...

My response was: "you obviously haven't used one yet"...

They were already bossy enough... 'MAKE A LEGAL U-TURN NOW!'

Havoc•6mo ago
Tried explaining what a Tamagotchi was to someone recently. Looks of utter bewilderment
azeirah•6mo ago
Really? Tamagotchis seem to be one of those things that have charm beyond straight up nostalgia :o
worik•6mo ago
That is a natural reaction.
tsoukase•6mo ago
I grew up watching Kitt and when I watched it again a few days ago, I didn't feel anything. Much less my kids.
hulitu•6mo ago
> Someday real soon, kids being shown episodes of 'Knight Rider' by their grandparents won't understand why a talking car was so futuristic.

Maybe in 100 years. The talking car was more intelligent than Siri, Alexa or Hey Google.

It is not that we are not able to "talk" to computers, it is that we "talk" with computers only so that they can collect more data about us. Their "intelligence" is limited to simple text understanding.

olddustytrail•6mo ago
I think maybe you missed the last three years. We're not talking about Alexa or Hey Google level.

We're talking about Google Gemini or ChatGPT.

qgin•6mo ago
It’s impossible to explain to kids now why it was funny on Seinfeld when Kramer pretended to be MoviePhone and says “why don’t you just tell me the name of the movie you selected!”
rahen•6mo ago
No need for an RPi 5. Back in 1982, a dual or quad-CPU X-MP could have run a small LLM, say, with 200–300K weights, without trouble. The Crays were, ironically, very well suited for neural networks; we just didn't know it yet. Such an LLM could have handled grammar and code autocompletion, basic linting, or documentation queries and summarization. By the late 80s, a Y-MP might even have been enough to support a small conversational agent.

A modest PDP-11/34 cluster with AP-120 vector coprocessors might even have served as a cheaper pathfinder in the late 70s for labs and companies who couldn't afford a Cray 1 and its infrastructure.

But we lacked both the data and the concepts. Massive, curated datasets (and backpropagation!) weren’t even a thing until the late 80s or 90s. And even then, they ran on far less powerful hardware than the Crays. Ideas and concepts were the limiting factor, not the hardware.

adwn•6mo ago
> a small LLM, say, with 200–300K weights

A "small Large Language Model", you say? So a "Language Model"? ;-)

> Such an LLM could have handled grammar and code autocompletion, basic linting, or documentation queries and summarization.

No, not even close. You're off by 3 orders of magnitude if you want even the most basic text understanding, 4 OOM if you want anything slightly more complex (like code autocompletion), and 5–6 OOM for good speech recognition and generation. Hardware was very much a limiting factor.

rahen•6mo ago
I would have thought the same, but EXO Labs showed otherwise by getting a 300K-parameter LLM to run on a Pentium II with only 128 MB of RAM at about 50 tokens per second. The X-MP was in the same ballpark, with the added benefit of native vector processing (not just some extension bolted onto a scalar CPU) which performs very well on matmul.

https://www.tomshardware.com/tech-industry/artificial-intell...
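
For a sense of scale, here is some back-of-the-envelope arithmetic (a sketch; the parameter count and token rate are the figures from the article above, while fp32 storage and the 2-FLOPs-per-weight-per-token rule are assumptions) showing why a 300K-parameter model is not hardware-bound on machines of that class:

    # Napkin math: what does a 300K-parameter LLM actually demand?
    params = 300_000          # model size reported by EXO Labs
    bytes_per_param = 4       # assume fp32 weights
    tokens_per_sec = 50       # rate reported on the Pentium II

    weights_mb = params * bytes_per_param / 1e6
    required_mflops = 2 * params * tokens_per_sec / 1e6   # ~2 FLOPs per weight per token

    print(f"weight memory: ~{weights_mb:.1f} MB")      # ~1.2 MB
    print(f"compute: ~{required_mflops:.0f} MFLOPS")   # ~30 MFLOPS

Roughly a megabyte of weights and ~30 MFLOPS of sustained matmul throughput is comfortably within reach of an X-MP-class vector machine; as the rest of the thread argues, training and data are the harder part.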

John Carmack was also hinting at this: we might have had AI decades earlier, obviously not large GPT-4 models but useful language reasoning at a small scale was possible. The hardware wasn't that far off. The software and incentives were.

https://x.com/ID_AA_Carmack/status/1911872001507016826

adwn•6mo ago
> EXO Labs showed otherwise by getting a 300K-parameter LLM to run on a Pentium II with only 128 MB of RAM at about 50 tokens per second

50 token/s is completely useless if the tokens themselves are useless. Just look at the "story" generated by the model presented in your link: Each individual sentence is somewhat grammatically correct, but they have next to nothing to do with each other, they make absolutely no sense. Take this, for example:

"I lost my broken broke in my cold rock. It is okay, you can't."

Good luck tuning this for turn-based conversations, let alone for solving any practical task. This model is so restricted that you couldn't even benchmark its performance, because it wouldn't be able to follow the simplest of instructions.

rahen•6mo ago
You're missing the point. No one is claiming that a 300K-param model on a Pentium II matches GPT-4. The point is that it works: it parses input, generates plausible syntax, and does so using algorithms and compute budgets that were entirely feasible decades ago. The claim is that we could have explored and deployed narrow AI use cases decades earlier, had the conceptual focus been there.

Even at that small scale, you can already do useful things like basic code or text autocompletion, and with a few million parameters on a machine like a Cray Y-MP, you could reasonably attempt tasks like summarizing structured or technical documentation. It's constrained in scope, granted, but it's a solid proof of concept.

The fact that a functioning language model runs at all on a Pentium II, with resources not far off from a 1982 Cray X-MP, is the whole point: we weren’t held back by hardware, we were held back by ideas.

alganet•6mo ago
> we weren’t held back by hardware

Llama 3 8B took 1.3M hours to train on an H100-80GB.

Of course, it didn't take 1.3M hours (~150 years). So, many machines with 80GB were used.

Let's do some napkin math. 150 machines with a total of 12TB VRAM for a year.

So, what would be needed to train a 300K parameter model that runs on 128MB RAM? Definitely more, much more than 128MB RAM.

Llama 3 runs on 16GB VRAM. Let's imagine that's our Pentium II of today. You need at least 750 times what is needed to run it in order to train it. So, you would have needed ~100GB RAM back then, running for a full year, to get that 300K model.

How many computers with 100GB+ RAM do you think existed in 1997?

Also, I only did RAM. You also need raw processing power and massive amounts of training data.
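
Putting that napkin math in one place (a sketch; the GPU-hour figure, the assumed fleet size, and the run-vs-train memory heuristic are the ones used in this comment, not measured constants):

    # Reproducing the scaling argument above, order-of-magnitude only.
    llama3_gpu_hours = 1.3e6                    # H100-hours quoted for Llama 3 8B
    gpus = 150                                  # assumed fleet size
    wallclock_years = llama3_gpu_hours / gpus / (24 * 365)

    train_vram_gb = gpus * 80                   # 80 GB per H100 -> 12 TB total
    run_vram_gb = 16                            # enough to run Llama 3 8B
    ratio = train_vram_gb / run_vram_gb         # ~750x train vs. run memory

    print(f"wall-clock: ~{wallclock_years:.1f} year(s) on {gpus} GPUs")
    print(f"train/run memory ratio: ~{ratio:.0f}x")
    print(f"applied to a 128 MB Pentium II: ~{0.128 * ratio:.0f} GB")   # ~96 GB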

rahen•6mo ago
You’re basically arguing that because A380s need millions of liters of fuel and a 4km runway, the Wright Flyer was impossible in 1903. That logic just doesn’t hold. Different goals, different scales, different assumptions. The 300K model shows that even in the 80s, it was both possible and sufficient for narrow but genuinely useful tasks.

We simply weren’t looking, blinded by symbolic programming and expert systems. This could have been a wake-up call, steering AI research in a completely different direction and accelerating progress by decades. That’s the whole point.

alganet•6mo ago
"I mean, today we can do jet engines in garage shops. Why would they needed a catapult system? They could have used this simple jet engine. Look, here is the proof, there's a YouTuber that did a small tiny jet engine in his garage. They were held back by ideas, not aerodynamics and tooling precision."

See how silly it is?

Now, focus on the simple question. How would you train the 300K model in 1997? To run it, someone has to train it first.

rahen•6mo ago
Reductio ad absurdum. A 300K-param model was small enough to be trained offline, on curated datasets, with CPUs and RAM capacities that absolutely existed at the time, especially in research centers.

Backprop was known. Data was available. Narrow tasks (completion, summarization, categorization) were relevant. The model that runs on a Pentium II could have been trained on a Cray, or across time on any reasonably powerful 90s workstation. That's not fantasy: LeNet-5, with its 65K weights, was trained on a mere Sun workstation in the early 90s.

The limiting factor wasn’t compute, it was the conceptual framing as well as the datasets. No one seriously tried, because the field was dominated by symbolic logic and rule-based AI. That’s the core of the argument.

alganet•6mo ago
> Reductio ad absurdum.

My dude, you came up with the Wright brothers comparison, not me. If you don't like fallacies, don't use them.

> on any reasonably powerful 90s workstation

https://hal.science/hal-03926082/document

Quoting the paper now:

> In 1989 a recognizer as complex as LeNet-5 would have required several weeks’ training and more data than were available and was therefore not even considered.

Their own words seem to match my assessment.

Training time and data availability determined how much this whole thing could advance, and researchers were aware of those limits.

fentonc•6mo ago
I think a quad-CPU X-MP is probably the first computer that could have run (not train!) a reasonably impressive LLM if you could magically transport one back in time. It supported a 4GB (512 MWord) SRAM-based "Solid State Drive" with a supported transfer bandwidth of 2 GB/s, and about 800 MFLOPS CPU performance on something like a big matmul. You could probably run a 7B parameter model with 4-bit quantization on it with careful programming, and get a token every couple seconds.
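
A quick sanity check on that estimate (a sketch; the 4-bit quantization, 4 GB SSD, 2 GB/s bandwidth, and 800 MFLOPS figures come from the comment above, while the 2-FLOPs-per-weight-per-token rule is an assumption):

    # Inference napkin math for a 7B model on a quad-CPU X-MP as described above.
    params = 7e9
    weights_gb = params * 4 / 8 / 1e9           # 4-bit weights -> ~3.5 GB, fits the 4 GB SSD

    ssd_bandwidth_gbs = 2.0
    bandwidth_s_per_token = weights_gb / ssd_bandwidth_gbs   # stream all weights once per token

    cpu_flops = 800e6
    compute_s_per_token = 2 * params / cpu_flops              # if 4-bit MACs count as full FLOPs

    print(f"weights: ~{weights_gb:.1f} GB")
    print(f"bandwidth floor: ~{bandwidth_s_per_token:.1f} s/token")   # ~1.8 s
    print(f"compute floor:   ~{compute_s_per_token:.0f} s/token")     # ~18 s

The memory-streaming side supports the "token every couple of seconds" figure; whether 800 MFLOPS of vector throughput could actually chew through ~14 GFLOP worth of quantized multiply-adds per token is the more optimistic half of the estimate.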
rahen•5mo ago
This sounds plausible and fascinating. Let’s see what it would have taken to train a model as well.

Given an estimate of 6 FLOPs per token per parameter, training a 7B parameter model would require about 1.26×10^22 FLOPs. That translates to roughly 500 000 years on an 800 MFLOPS X-MP, far too long to be feasible. Training a 100M parameter model would still take nearly 70 years.

However, a 7M-parameter model would only have required about six months of training, and a 14M one about a year, so let’s settle on 10 million. That’s already far more reasonable than the 300K model I mentioned earlier.

Moreover, a 10M parameter model would have been far from useless. It could have performed decent summarization, categorization, basic code autocompletion, and even powered a simple chatbot with a short context, all that in 1984, which would have been pure sci-fi back in those days. And pretty snappy too, maybe around 10 tokens per second if not a little more.

Too bad we lacked the datasets and the concepts...
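
The same 6-FLOPs-per-parameter-per-token rule, spelled out (a sketch; the token counts are assumptions picked to roughly match the figures above):

    # Training-time napkin math on an 800 MFLOPS X-MP: FLOPs ~ 6 * params * tokens.
    SECONDS_PER_YEAR = 3.15e7
    xmp_flops = 800e6

    def train_years(params, tokens):
        return 6 * params * tokens / xmp_flops / SECONDS_PER_YEAR

    print(f"7B params, 300B tokens:  ~{train_years(7e9, 3e11):,.0f} years")   # ~500,000 years
    print(f"100M params, 3B tokens:  ~{train_years(1e8, 3e9):,.0f} years")    # ~70 years
    print(f"10M params, 300M tokens: ~{train_years(1e7, 3e8):.1f} years")     # under a year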

JdeBP•6mo ago
You should have been watching lots of SciFi, too. (-:

I have a Raspberry Pi in a translucent "modular case" from the PiHut.

* https://thepihut.com/products/modular-raspberry-pi-4-case-cl...

It is very close to the same size and appearance as the "key" for Orac in Blake's 7.

I have so far resisted the temptation to slap it on top of a Really Useful Box and play the buzzing noise.

* https://youtube.com/watch?v=XOd1WkUcRzY

Obviously not even Avon figured out that the main box of Orac was a distraction, a fancy base station to hold the power supply, WiFi antenna, GPS receiver, and some Christmas tree lights, and all of the computational power was really in the activation key.

The amusing thing is that that is not the only 1970s SciFi telly prop that could become almost real today. It shouldn't be hard -- all of the components exist -- to make an actual Space 1999 commlock; not just a good impression of one, but a functioning one that could do teleconferencing over a LAN, IR control for doors and tellies and stuff, and remote computer access.

Not quite in time for 1999, alas. (-:

* https://mastodonapp.uk/@JdeBP/114590229374309238

Cheer2171•6mo ago
> If AI systems continue to improve at the current rate and we combine that with improvements in hardware that are measured in orders of magnitude every 15 years or so then it stands to reason that we'll get that "super-intelligent GAI" system any day now.

Oh come off it now. This could have been just a good blog post that didn't make me want to throw my phone across the room. GenAI is a hell of a drug. It's shocking how many technical professionals fall into the hype and become irrationally exuberant.

Cheer2171•6mo ago
Even if you are a GAI / super intelligence booster, the limiting factor is clearly software and data. If it is possible, the big tech AI labs already have all the compute they need to make one deployment work. Hardware is limiting for deploying at scale and at a profit.
moffkalast•6mo ago
I was rather about to point out that the 10x per 15 years for hardware hardly holds anymore for silicon, and it's ridiculous to expect it to continue.
Y_Y•6mo ago
> it stands to reason that

The upper-class "trust me bro"

bombcar•6mo ago
What happened to the programs/problems the Cray 1 solved? If anyone can do it on commodity hardware - is it being done? Is it all solved?
criddell•6mo ago
Most are not solved but modern systems can generate better solutions. Think about problems like forecasting weather or finite element analysis of mechanical systems.
Cheer2171•6mo ago
It was pretty basic models for tasks like weather forecasting and simulating nuclear reactions. We've come a long way on both the software modeling and hardware front.
acidburnNSA•6mo ago
We still use a lot of the same software for nuclear reactor simulations. They just run a lot faster.
cratermoon•6mo ago
Work in computational fluid dynamics is limited by computing power. Bigger and faster computers give more accuracy and speed.
whartung•6mo ago
A “famous” instance was the use of a Cray to render the collapse of Jupiter in the movie “2010”. A very early example of CGI in cinema.
johannes1234321•6mo ago
No. With more computing power the level of detail increased.

And some problems are even more complex.

My father spent his career researching coil forms for stellarator fusion reactors. Finding the shapes for their experiments then was a huge computational problem using then-state-of-the-art machines (incl. a Cray for a while), and even today's computing power isn't there yet.

Other problems we now solve regularly on our phones ...

benob•6mo ago
The Cray-1 should be compared to today's Raspberry Pi Pico 2 / RP2350, which has similar specs (using external RAM).
jgalt212•6mo ago
I won't rest until the average microcontroller in an optical mouse is more powerful than a Cray 1.
1oooqooq•6mo ago
try as you may, but that mouse will never work as a lounge center piece.
kdndnrndn•6mo ago
I'm not aware of any optical mouse using a general-purpose MCU; to my knowledge they are all using ASICs.
Rohansi•6mo ago
Some gaming mice do for running RGB lights, macros, or whatever.
sweetcocomoose•6mo ago
Nordic dominates the market for keyboards and mice. Programmable MCUs with BLE radios are required for any wireless devices.
bigfatkitten•6mo ago
There are millions, if not tens of millions of USB and PS/2 keyboards and mice out there powered by Cypress MCUs with 8051 cores.
Tajnymag•5mo ago
That's a lot of cores
qooiii2•6mo ago
A lot of touchscreens meet that requirement. Turns out it's often cheaper to solve problems with algorithms than avoid them by design.
zouhair•6mo ago
These comparisons are fun and all, but a better one would be to take the difference between whatever "computer" an average citizen would have used back in the day and the Cray-1, compare it with the difference between whatever one can use now and the current "Cray" (or whatever humans use now), and see the difference in cost.
cratermoon•6mo ago
The first Cray-1 was installed at Los Alamos National Laboratory in 1976. That same year Gary Kildall created CP/M and Steve Wozniak completed the Apple-1.
kayodelycaon•6mo ago
I did a little poking around and I think the modern equivalent to old supercomputers is a mainframe. Modern supercomputers take up entire warehouses, cost upwards of $100 million, and are measured in exaflops.

The Cray 1 cost US$7.9 million in 1977 (equivalent to $41 million in 2024) (Source: Wikipedia)

I have no idea what IBM z-series mainframes cost but I think it would be less.

$41 million can buy you one or more thousands of rack-mounted servers and the associated networking hardware.

My rough guess is that the gap between 2024 iPhones and mainframes is an order of magnitude larger than the gap between the Cray and anything else on the market at the time.

It’s also interesting to note how much software has changed. The actual machine code may be less optimized, but we have better algorithms and we have the option of using vast amounts of memory and disk to save cpu time. And that’s before we get into specialized hardware.

giantrobot•6mo ago
Mainframes aren't supercomputers. The point of a mainframe (anymore) is reliable transactions without downtime. They're not necessarily beasts at computation.

Supercomputers were and are beasts of not only computation but memory size and bandwidth. They're used for tasks where the computation is highly parallel but the memory is not. If you're doing nuclear physics or fluid dynamics every particle in a simulation has some influence on every other. The more particles and more state for each particle you can store and apply to every other particle makes for a more accurate simulation.

As SCs have improved in memory size and bandwidth simulations/modeling with them has gotten more accurate and more useful.

dardeaup•5mo ago
Agreed! If you look at the branding from IBM for their various hardware lines, it's clear that they agree with you:

    zSeries: z = "zero" downtime
    iSeries: i = "integration" (DB2 baked into OS)
    pSeries: p = "performance"
ajsnigrutin•6mo ago
Hardware has gone a long way...

...software... well, that's a different story.

While a Cray could compute millions of things and did a bunch of usable stuff for many groups of people who used it back then, a Raspberry Pi today has trouble even properly displaying a weather forecast at "acceptable speeds", because modern software has become very bloated, and that includes weather forecast sites that somehow have to include autoplaying video, usually an ad.

adgjlsfhk1•6mo ago
otoh a pi running stockfish would beat deep blue 100-0
lawik•6mo ago
No benchmarks. Hard to take this seriously.
_fat_santa•6mo ago
Reading this I wonder, say we did have a time machine and were somehow able to give scientists back in the day access to an RPI5. What sort of crazy experiments would that have spawned?

I'm sure when the Cray 1 came out, access to it must have been very restricted and there must have been hordes of scientists clamoring to run their experiments and computations on it. What would have happened if we gave every one of those clamoring scientists an RPI5?

And yes, I know this raises an interface problem of how they would even use one back in the day, but let's put that to the side and assume we figured out how to make an RPI5 behave exactly like a Cray 1 and allowed scientists to use it in a productive way.

maxerickson•6mo ago
Do you think they would have run experiments that have been missed in the meantime? Why?
mikewarot•6mo ago
First of all, how would they talk to it? You'd have to give them an RPI5 with serial console enabled, and strict instructions not to exceed the 3.3 volt limits of the I/O. Now it's reasonable that you could generate NTSC video out of it, so they could see any output on a screen.

When you then explained it was just bit-banging said NTSC output, they'd be even more amazed.

Aardwolf•6mo ago
Also give it an HDMI screen and USB keyboard; what more do you need to type code and see the result?
dottedmag•6mo ago
Serial port

The Cray 1 was released in 1975; teletypes were old tech at that time.

username223•6mo ago
> What sort of crazy experiments would that have spawned?

Scientists then (at least a lot of them) knew what they wanted to do, and it required faster computers rather than more of them. A lot of that Cray power at the national labs was doing fluid simulation (i.e. nuclear explosions), and with the computers they had in the 80s, it was done in one or two dimensions, relying on symmetry. Going from n^2 to n^3 grid cells was the obvious next step, but took a lot more memory and CPU speed.

Havoc•6mo ago
Finding more & more that power efficiency is what's driving me towards new gear rather than lack of horsepower.

A few niche uses aside (gaming, llm) a vaguely modern desktop is good enough regardless of details.

dgacmu•6mo ago
Comparing against a Raspberry Pi 5 is kind of overkill. While a Pico 2 is close to computationally equivalent to a Cray-1 now (version 2 added hardware floating point), the Cray still has substantially more memory: almost 9MB vs 520k.

For parity, you have to move up to a Raspberry Pi Zero 2, which costs $15 and uses about 2W of power.

A million times cheaper than a Cray in 2025 dollars and quite a bit more capable.

nereye•6mo ago
The memory in the Cray was external and there are RP2350 boards with 16MB of QSPI flash, here’s one of them:

https://www.olimex.com/Products/RaspberryPi/PICO/PICO2-XXL/o...

mrheosuper•6mo ago
You can add PSRAM to the RP2350, which brings it close to the Cray.
omega3•6mo ago
Are there any details or examples of the computational work the Cray 1 was used for?
dahart•6mo ago
My former boss (Steve Parker, RIP) shared a story of Turner Whitted making predictions about how much compute would be needed to achieve real-time ray tracing, some time around when his seminal paper was published (~1980). As the story goes, Turner went through some calculations and came to the conclusion that it’d take 1 Cray per pixel. Because of the space each Cray takes, they’d be too far apart and he thought they wouldn’t be able to link it to a monitor and get the results in real time, so instead you’d probably have to put the array of Crays in the desert, each one attached to an RGB light, and fly over it in an airplane to see the image.

Another comparison that is equally astonishing to the RPi is that modern GPUs have exceeded Whitted’s prediction. Turner’s paper used 640x480 images. At that resolution, extrapolating the 160 Mflops number, 1 Cray per pixel would be 49 Tera flops. A 4080 GPU has just shy of 50 Tflops peak performance, so it has surpassed what Turner thought we’d need.

Think about that - not just faster than a Cray for a lot less money, but one cheap consumer device is faster than 300,000 Crays.(!) Faster than a whole Cray per pixel. We really have come a long, long way.

The 5090 has over 300 Tflops of ray tracing perf, and the Tensor cores are now in the Petaflops range (with lower precision math), so we’re now exceeding the compute needed for 1 Cray per pixel at 1080p. 1 GPU faster than 2M Crays. Mind blowing.
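
For anyone who wants to poke at the arithmetic behind those comparisons (a sketch using the 160 MFLOPS Cray-1 figure and the GPU throughputs quoted above):

    # "One Cray per pixel" napkin math.
    cray_flops = 160e6                       # Cray-1 peak, as quoted above

    def crays_per_frame_flops(width, height):
        return width * height * cray_flops

    print(f"640x480:   {crays_per_frame_flops(640, 480) / 1e12:.0f} TFLOPS")     # ~49, roughly a 4080
    print(f"1920x1080: {crays_per_frame_flops(1920, 1080) / 1e12:.0f} TFLOPS")   # ~332, roughly a 5090's RT rate
    print(f"Crays per 50 TFLOPS GPU: ~{50e12 / cray_flops:,.0f}")                # ~312,000
    print(f"Crays needed at one per 1080p pixel: {1920 * 1080:,}")               # ~2.1 million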

magicalhippo•6mo ago
> 1 Cray per pixel would be 49 Tera flops. A 4080 GPU has just shy of 50 Tflops peak performance

Interesting, wonder how it compares in terms of transistors. How many transistors combined did one Cray have in compute and cache chips?

dahart•6mo ago
The Wikipedia article says the Cray-1 has 200k gates. I assume that would mean something slightly north of 2x the number of transistors? https://en.wikipedia.org/wiki/Cray-1#Description

200k * 300k Cray-1s would be 60B gates, whereas the 4080 actually has 46B transistors. Seems like we’re totally in the right ballpark.

hattmall•6mo ago
Nice, but the ~40 year latency is kind of high.
msgodel•5mo ago
Well that's the way parallelism goes.
nottorp•6mo ago
But the Cray had a general purpose CPU while the GPUs have specialized hardware. Not exactly apples to apples.
monocasa•6mo ago
The main part of the Cray was a compute offload engine that asynchronously executed job lists submitted by front end general purpose computers that ran OSes like Unix.

It was actually pretty close to the model of a GPU.

phendrenad2•6mo ago
Whitted mentioned! Cofounder of the first 3d game engine company.
ForOldHack•6mo ago
Adjust the price of the Cray-1 for inflation, but not the power for Moore's law? Need I get my napkin out for a few calculations, or do we just FORGET MOORE'S LAW (which is mentioned no fewer than four times, without quantification)? Cray-1 (1976). RPi (2012). 37 years of elapsed time, about 24 2/3 elapsed generations, a 26,509,000-times increase in power. The Cray-1: 160 Mf. Something 26M times faster would yield 4,241 Gf (4.2 Pf), while the Pi 1 is capable of 13.5 Gf, so the RPi-1 (2012) is at about 0.31% of where Moore's-law power doubling would put it.

Now let's compare this to the Top 500. (See the point? Do not speak of Moore's law while ignoring the mathematical implications. And yes, 3/1000s is three thousandths.)

The Top 500 is at 1.7 exaflops, but by Moore's law it should be 4,241 Gf, or 4.2 Xf. So the Top 500 is not keeping up with Moore's law.
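
Redoing that napkin math explicitly (a sketch assuming an 18-month doubling period and the 37-year gap used above; note the scaled figure comes out in petaflops, so the Pi 1's share is even smaller than the percentage quoted above suggests):

    # Moore's-law extrapolation from the Cray-1 (1976) to the Raspberry Pi 1 (2012).
    cray_flops = 160e6
    pi1_flops = 13.5e9
    years = 37
    doubling_period_years = 1.5                  # assumed 18-month doubling

    doublings = years / doubling_period_years    # ~24 2/3
    factor = 2 ** doublings                      # ~26.6 million
    scaled_flops = cray_flops * factor

    print(f"factor: ~{factor:,.0f}")
    print(f"Moore's-law-scaled Cray-1: ~{scaled_flops / 1e15:.1f} PFLOPS")
    print(f"Pi 1 as a share of that: {pi1_flops / scaled_flops:.6%}")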

lttlrck•6mo ago
However, Moore's Law refers to the number of transistors, not FLOPS.
grubrunner666•6mo ago
This thread should be a MasterClass. Awesome reading. Seriously. -a gen x'er.
smcameron•6mo ago
And you can 3D print a Cray YMP case for your Raspberry Pi: https://www.thingiverse.com/thing:6947303
jwr•6mo ago
What I find somewhat puzzling is that these machines were used for the "really big problems". We used supercomputers for weather forecasting, finite element simulations, molecular modeling. And we were getting results.

I don't feel we are getting results that are thousands of times better today.

motorest•6mo ago
> I don't feel we are getting results that are thousands of times better today.

You are getting results that are way better than thousands of times. You just aren't aware where they are showing up.

To give you a glimpse, the same modelling problems which a couple of decades ago took days to come up with a crude solution are now being executed within a loop in optimization problems.

You are also seeing multiphysics and coupling problems showing up in mundane applications. We're talking about problems that augment the same modelling problems that a couple of decades ago took days to solve, with double or triple the degrees of freedom.

Without the availability of these supercomputers the size of credit cards, the whole field of computer-aided engineering would not exist.

Also, to boot, there are indeed diminishing returns. Increasing computational resources unblocks constraints such as being able to use doubles instead of floats. This means that lowering numerical errors by 3 or 4 decimal places comes for free, at the expense of taking around 4 times longer to solve the same problem.

To top things off, do you think the results of two decades ago were possible without employing a great deal of simplifications and crude approximations? As legend has it, the F117 Nighthawk got its design due to the computational limits of the time. Since then, stealth planes have become more performant, with smoother designs. That's what you get when your computational resources are a thousand times better.

adgjlsfhk1•6mo ago
We aren't getting results thousands of times better; we're getting results 10s of times better on problems with cubic (or worse) scaling. E.g., 3-day forecasts as of 2017 are more reliable than 1-day forecasts in 1990: https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2F...
ziofill•6mo ago
It is a frequent fantasy of mine to bring tech back to historical figures, like to show my phone to Galileo or to take Leonardo da Vinci for a ride in my car. But I guess you don't need to go that far to blow minds.
prmoustache•6mo ago
Just show it to someone being released after 30 to 40 years in jail.
Animats•6mo ago
Back in 2020, someone built a working model of a Cray-1.[1] Not only is it instruction compatible, using an FPGA, it's built into a 1/10 scale case that looks like a Cray-1.

The Cray-1 is really a very simple machine, with a small instruction set. It just has 64 of everything. It was built from discrete components, almost the last CPU built that way.

[1] https://www.cpushack.com/2010/09/15/homebrew-cray-1a-1976-vs...

Animats•5mo ago
(2010, actually.)
_tom_•6mo ago
The pi has a sub $100 accelerator card that takes it to 30 TFLOPs. So you can add three more orders of magnitude of performance for a rough doubling of the price.
dale_huevo•6mo ago
> the Cray had about 160MFLOPS of raw processing power; the Pi has... up to 30GFLOPS. Yes... that's gigaFLOPS. This makes it almost 200 times faster than the Cray.

Imagine traveling back to 1977 and explaining to someone that in 2025 we've allocated all that extra computing power to processing javascript bundles and other assorted webshit.

usrnm•6mo ago
That actually wouldn't be so bad, but in reality the number one use case for the Raspberry Pi is blinking LEDs for some time and collecting dust afterwards.
darkwater•6mo ago
Still a better use than crunching JavaScript to show you ads and track you around.
hagbard_c•6mo ago
No, the number one use case for Pies is being built into commercial hardware in the form of compute boards.
qingcharles•6mo ago
In 2013 I'd just built a new top-spec PC. I looked up the performance and then back-calculated using the TOP500† and I believe it would have been the most powerful supercomputer in the world in about 1993. If you back-calculated further, I think around 1980 it became more powerful than every computer on the planet combined.

† https://en.wikipedia.org/wiki/TOP500

noobermin•6mo ago
I guess I'm old, because this hasn't really been that insightful or interesting an observation by itself anymore. People often talk about the technological advancement of computing as if it is a force of nature, whereas the amazing specs of, say, an RP2350 compared to the Cray-1 are more a story of economies of scale than of mere technical know-how and design. The reason an RP2350 is a few dollars is because of fabs, infrastructure, and institutional knowledge that likely dwarf the cost of producing a Cray-1. I wouldn't even be surprised if someone bothered to do a similar calculation of the cost of the infrastructure behind each Cray-1 at the time and found it to be less than what is needed to produce RP2350s today. The unit price of an RP2350 to consumers being so cheap (right now, while fabs still want to make it) somewhat elides the actual costs involved.

Animats below said that the Cray-1 was made from discrete components. Good luck making an RP2350 from discrete components; it likely wouldn't even function well at the desired frequency due to speed-of-light and RF interference issues, and it would be even worse for the GHz Broadcoms used in the RPi 5. This means that in a post-apocalyptic future you could make another Cray-1 given enough time and resources. In 20 years, when the fabs have stopped making RP2350s, there simply will not be any more of them.

adgjlsfhk1•6mo ago
I think the really interesting point here is that a reasonably high level of computer is basically free. You can get a 32-bit microcontroller with 16mb of ram at above 100MHz for well under $1. You can buy a USB cable and it has 2 full computers inside it.
mrheosuper•6mo ago
where do i find a MCU with 16MB ram for under $1 ?
adgjlsfhk1•6mo ago
dammit typo. that was supposed to be 16kb
username223•6mo ago
That was a weird turn to AI at the end, but otherwise an interesting reflection. I'm a little too young to have grown up in the era of the Cray-1, but even in the early 90s, processors ran at 90 MHz and hard drives cost $1 per megabyte. Back when personal computers ran at single-digit megahertz and had kilobytes of RAM, a Cray was mind-blowing.

The exciting part back then was that, while computers were never "good enough," they were getting noticeably better every few months. If you were in the market for a computer, you knew you could get a noticeably better one for the same price if you just waited a little while. The next model was exciting, because it was tangibly better. At some point personal computers became "good enough" for most people. Other than compensating for creeping software bloat, there hasn't been much reason for most people to be excited about new computers in a decade or more.

qgin•6mo ago
Yes but can you sit on your Raspberry Pi like this https://volumeone.org/uploads/image/article/005/898/5898/hea...
pjmlp•6mo ago
And then people burn all this progress shipping browsers with their application, because they can't be bothered to learn either Web standards or native tooling.
nkotov•5mo ago
You can see a Cray-1 at CHM. It looks so cool that pictures don't do it justice.
daotoad•5mo ago
This little piece reminds me of a conversation I had about the power of modern mobile phones.

Given a modern flagship phone, in what year would that phone have been equivalent to total world computational power?

For example, based on TFA, a Pi5 represents around a thousand Cray 1 systems in 1977. Based on that, it seems likely that a single Pi5 outstrips total world supercomputer capacity in 1977.

We tossed some numbers around, but the rough consensus was that we were likely to have the 60s covered, and most of the 70s as well. Given that this was a decade ago, I expect that we could move forward a few years.

ninetyninenine•5mo ago
Let’s hope this happens for LLM hardware.