Cool experiment, but is this actually a practical path forward or just a dead end with a great headline? Someone convince me I'm wrong...
If you have two systems with opposite bottlenecks you can build a composite system with the bottlenecks reduced.
But sometimes you just have to let the academics cook for a few decades and then something fantastical pops out the other end. If we ever make something that is truly AGI, its architecture is probably going to look more like this SpiNNaker machine than anything we are currently using.
There's plenty to learn from endeavors like this, even if this particular approach isn't the one that e.g. achieves AGI.
NVIDIA step up your game. Now I want to run stuff on based cores.
- 152 cores per chip, equivalent to ~128 CUDA cores per SM
- per-chip SRAM (20 MB) equivalent to SM high-speed shared memory
- per-board DRAM (96 GB across 48 chips) equivalent to GPU global memory
- boards networked together with something akin to NVLink
I wonder if they use HBM for the DRAM, or do anything like coalescing memory accesses.
Thermodynamic Computing https://knowm.org/thermodynamic-computing/
It's the most high-influence, low-exposure essay I've ever read. As far as I'm concerned, this dude is a silent prescient genius working quietly for DARPA, and I had a sneak peek into future science when I read it. It's affected my thinking and trajectory for the past 8 years.
"Physics–and more broadly the pursuit of science–has been a remarkably successful methodology for understanding how the gears of reality turn. We really have no other methods–and based on humanity’s success so far we have no reason to believe we need any."
Physics, which is to say physical methods, has indeed been remarkably successful... for the types of things physical methods select for! To say it is exhaustive not only begs the question, but the claim itself is not even demonstrable by those methods.
The second claim contains the same error, but with more emphasis. This is just off-the-shelf scientism, and scientism, quite apart from the withering refutations it has received, should be obviously self-refuting. Is the claim that "we have no other methods but physics" (where physics is the paradigmatic empirical science; substitute accordingly) a scientific claim? Obviously not. It is a philosophical claim. That already refutes the claim.
Thus, philosophy has entered the chat, and this is no small concession.
Now, physics vs. other scientific disciplines, sure. Physicists love to claim dominion just like mathematicians do. It is generally true, however, that physics = math + reality, and we don't actually have any evidence of anything in this world existing outside a physical description (e.g. a lot of physics combined = chemistry, a lot of chemistry = biology, a lot of biology = sociology, etc.). Thus it's reasonable to assume that the chemistry in this world is 100% governed by the laws of physics, and transitively this is true for sociology too (indeed, game theory is one way we quantifiably explain the physical reality of why people behave the way they do). We also see this in math, where different disciplines have different "bridges" between them. Does that mean they're actually separate disciplines, or just that we've chosen to name features on the topology as such?
It's interesting that the article doesn't say that's what it's actually going to be used for - just event-driven (message-passing) simulations, with application to defense.
Wasn't that the plot of the movie War Games?
If you don't handle effects in precisely the correct order, the simulation ends up being more about the host architecture, network topology, and how race conditions happen to resolve than about the model itself. A spike that precedes another spike has to be simulated as preceding it in exactly the right way, or things like STDP (spike-timing-dependent plasticity) will wildly misfire. The promised land of "online learning" will turn into a slip & slide.
A priority queue using a quaternary min-heap implementation is approximately the fastest way I've found to serialize spikes on typical hardware. This obviously isn't how it works in biology, but we are trying to simulate biology on a different substrate so we must make some compromises.
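To make that concrete, here's a minimal sketch (my own, not the commenter's code, and in plain Python rather than anything tuned for speed) of a 4-ary (quaternary) min-heap used as a spike event queue: spikes go in as they're generated and always come out in timestamp order, which is the serialization property being described. The class and field names are made up for illustration.

```python
class SpikeQueue:
    """Priority queue of spike events keyed on delivery time (4-ary min-heap)."""
    BRANCH = 4  # quaternary: shallower tree than a binary heap

    def __init__(self):
        self._heap = []  # entries are (time, neuron_id) tuples

    def push(self, time, neuron_id):
        self._heap.append((time, neuron_id))
        self._sift_up(len(self._heap) - 1)

    def pop(self):
        heap = self._heap
        top = heap[0]
        last = heap.pop()
        if heap:
            heap[0] = last
            self._sift_down(0)
        return top

    def _sift_up(self, i):
        heap = self._heap
        while i > 0:
            parent = (i - 1) // self.BRANCH
            if heap[i] < heap[parent]:
                heap[i], heap[parent] = heap[parent], heap[i]
                i = parent
            else:
                break

    def _sift_down(self, i):
        heap, n = self._heap, len(self._heap)
        while True:
            first = i * self.BRANCH + 1
            if first >= n:
                break
            # pick the smallest of up to four children
            smallest = min(range(first, min(first + self.BRANCH, n)),
                           key=lambda c: heap[c])
            if heap[smallest] < heap[i]:
                heap[i], heap[smallest] = heap[smallest], heap[i]
                i = smallest
            else:
                break

# Usage: insertion order doesn't matter, pop order is strictly by time.
q = SpikeQueue()
q.push(2.5, neuron_id=7)
q.push(1.0, neuron_id=3)
print(q.pop())  # (1.0, 3) -- earliest spike first
```

The wider branching factor mainly buys a shallower heap and friendlier memory access during sift-down, which is presumably why it tends to beat a binary heap on typical cache-hierarchy hardware.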
I wouldn't argue that you couldn't achieve wild success in a distributed & more non-deterministic architecture, but I think it is a very difficult battle that should be fought after winning some easier ones.
Did Sandia pay list price? Or did SpiNNcloud Systems give it to Sandia for free (or at least for a heavily subsidised price)? I conjecture the latter. Maybe someone from Sandia is on the list here and can provide details?
SpiNNcloud Systems is known for making misleading claims. For example, their home page https://spinncloud.com/ lists DeepMind, DeepSeek, Meta and Microsoft as "Examples of algorithms already leveraging dynamic sparsity", giving the false impression that those companies use SpiNNcloud Systems machines, or the specific computer architecture SpiNNcloud Systems sells. Their claims about energy efficiency (like "78x more energy efficient than current GPUs") also seem sketchy. How do they measure energy consumption and trade it off against compute capacity? E.g. a Raspberry Pi uses less absolute energy than an NVIDIA Blackwell, but is that a meaningful comparison?
I'd also like to know how to program this machine. Neuromorphic computers have so far been terribly difficult to program. E.g. have JAX, TensorFlow and PyTorch been ported to SpiNNaker 2? I doubt it.
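For context: historically, SpiNNaker 1 was programmed not through deep-learning frameworks but through PyNN (the sPyNNaker backend), where you describe populations of neurons and the projections between them and let the toolchain map that onto the chips. Whether SpiNNaker 2 keeps the same module name is an assumption on my part, but the flavour of it looks roughly like this:

```python
# Hedged sketch of the PyNN-style workflow used on SpiNNaker 1 (sPyNNaker).
# The backend module name for SpiNNaker 2 is an assumption; the PyNN API
# itself (setup / Population / Projection / run) is the standard one.
import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)  # simulation timestep in ms

# 100 Poisson spike sources driving 100 leaky integrate-and-fire neurons
stim = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
neurons = sim.Population(100, sim.IF_curr_exp())

sim.Projection(stim, neurons,
               sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)          # run for one second of biological time
spikes = neurons.get_data("spikes")
sim.end()
```

That is a very different programming model from JAX/TensorFlow/PyTorch, which is part of why porting those frameworks to neuromorphic hardware is so hard.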
One thing to remember is an operating system is just another computer program.
John von Neumann's concept of the instruction counter was great for the short run, but eventually we'll all learn it was a premature optimization. All those transistors tied up as RAM, sitting idle most of the time, are a huge waste.
In the end, high speed computing will be done on an evolution of FPGAs, where everything is pipelined and parallel as heck.
(Sandia means watermelon in Spanish)
I also don't understand why this machine is interesting. It has a lot of RAM.... ok, and? I could get a consumer-grade PC with a large amount of RAM (admittedly not quite as much), put my applications in a ramdisk, e.g. tmpfs, and get the same benefit.
In short, what is the big deal?
realo•10h ago
Oh... 138240 Terabytes of RAM.
Ok.
throwaway5752•9h ago
But at least in this case, one wouldn't be subject to macro-scale nonlinear effects arising from the uncertainty principle when trying to "restore" the system.
crtasm•10h ago
So a paltry 2,304 GB RAM
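Working from the figures quoted upthread (96 GB of DRAM per 48-chip board, i.e. 2 GB per chip), 2,304 GB would correspond to 24 boards: 24 × 48 × 2 GB = 2,304 GB. If those per-board numbers are right, the petabyte-scale figure above looks like a units mix-up.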
Nevermark•4h ago
On the TRS-80 Model III, the reset button was a bright red recessed square to the right of the attached keyboard.
It was irresistible to anyone who had no idea what you were doing as you worked, lost in the flow, insensitive to the presence of another human being, until...
--
Then there was the Kaypro. Many of their systems had a bug, software or hardware, that would occasionally cause an unplanned reset the first time you tried writing to the disk after you turned it on. Exactly the wrong moment.
Footpost•6h ago
https://cointelegraph.com/news/neuromorphic-computing-breakt...