With GPUs you have all these challenges, plus a massively complicated set of custom compilers and interfaces on the software side, while at the same time trying to keep broken user software written against some other company's interface not only functional but performant.
I couldn't find any buy-it-now links, but 512GB sticks don't seem to be fantasies, either: https://news.samsung.com/global/samsung-develops-industrys-f...
Meanwhile, Micron (Crucial) 64GB DDR5 (SO-)DIMMs have been available for a few months now.
Since it seems A100s top out at 80GB and appear to start at $10,000, I'd say it's a steal.
Yes, I'm acutely aware that bandwidth matters, but my mental model is that the rest of that sentence is "up to a point," since those "self-hosted LLM" threads are filled to the brim with people measuring tokens per minute, or even running inference on CPU.
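As a rough sketch of why bandwidth dominates there (all numbers below are assumptions I'm plugging in for illustration, not measurements): generating one token means streaming roughly all of the weights through the compute units once, so memory bandwidth alone caps tokens per second:

    /* Back-of-the-envelope: token rate ceiling = memory bandwidth / model size.
       All figures are illustrative assumptions, not measurements. */
    #include <stdio.h>

    int main(void) {
        double weights_gb    = 70.0 * 2.0;  /* assumed: 70B params at FP16 (2 bytes each) */
        double ddr5_gb_per_s = 90.0;        /* assumed: dual-channel DDR5 system RAM */
        double hbm_gb_per_s  = 2000.0;      /* assumed: A100-class HBM */

        printf("CPU/DDR5 ceiling: ~%.1f tokens/sec\n", ddr5_gb_per_s / weights_gb);
        printf("GPU/HBM ceiling:  ~%.1f tokens/sec\n", hbm_gb_per_s / weights_gb);
        return 0;
    }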
I'm not hardware adjacent enough to try such a stunt, but there was also recently a submission of a BSD-3-Clause implementation of Google's TPU <https://news.ycombinator.com/item?id=44111452>
Bearing in mind the aforementioned "I'm not a hardware guy," my mental model of any system RAM access for GPUs is:
1. copy weights from SSD to RAM
2. trigger GPU with that RAM location
3. GPU copies weights over PCIe bus to do calculation
4. GPU copies activations over PCIe bus back to some place in RAM
5. goto 3
If my understanding is correct, this PCIe link (even at 16 lanes) is still shared with everything else on the motherboard that is also using PCIe, to say nothing of the actual protocol handshaking, since it's a common bus and thus needs contention management. I would presume such a stunt would at bare minimum need to contend with other SSD traffic and the actual graphical part of the GPU's job[1][2].

Contrast this with memory socket(s) on the "GPU's mainboard," where it is, what, 3mm of trace wires away from ripping the data back and forth between its RAM and its processors, only choosing to go over PCIe to push the result out to system RAM. It can have its own PCIe link to speak to other sibling GPGPU setups for doing multi-device inference[3].
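A minimal sketch of those steps using the CUDA runtime and cuBLAS, assuming an NVIDIA card (error handling, real model loading, and the per-layer loop from step 5 are elided):

    /* Stage data in host RAM, copy it over PCIe to VRAM once, do the math
       on-device, copy only the (much smaller) activations back.
       Sketch only: no error checks, weights are just zeroed buffers. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main(void) {
        const int n = 4096;                          /* assumed layer size */
        size_t wbytes = (size_t)n * n * sizeof(float);
        size_t vbytes = (size_t)n * sizeof(float);

        /* 1. "copy weights from SSD to RAM" -- stand-in: host buffers */
        float *h_w = calloc((size_t)n * n, sizeof(float));
        float *h_x = calloc(n, sizeof(float));
        float *h_y = calloc(n, sizeof(float));

        /* 2./3. hand the GPU that data: copy over PCIe into device memory */
        float *d_w, *d_x, *d_y;
        cudaMalloc((void **)&d_w, wbytes);
        cudaMalloc((void **)&d_x, vbytes);
        cudaMalloc((void **)&d_y, vbytes);
        cudaMemcpy(d_w, h_w, wbytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_x, h_x, vbytes, cudaMemcpyHostToDevice);

        /* the GPU runs the matrix-vector product entirely out of its own VRAM */
        cublasHandle_t handle;
        cublasCreate(&handle);
        const float one = 1.0f, zero = 0.0f;
        cublasSgemv(handle, CUBLAS_OP_N, n, n, &one, d_w, n, d_x, 1, &zero, d_y, 1);

        /* 4. copy activations back over PCIe into host RAM (5. would loop) */
        cudaMemcpy(h_y, d_y, vbytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", h_y[0]);

        cublasDestroy(handle);
        cudaFree(d_w); cudaFree(d_x); cudaFree(d_y);
        free(h_w); free(h_x); free(h_y);
        return 0;
    }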
I would entertain people saying "but what a waste having 128GB of RAM only usable for GPGPU tasks" but if all these folks are right in claiming that it's the end of software engineering as we know it, I would guess it's not going to be that idle
1: I wish I had actually made a bigger deal out of wanting a GPGPU, since for this purpose I don't care at all whether it runs DirectX or Vulkan or whatever
2: furthermore, if "just use system RAM" were such a hot idea, I don't think it would be 2025 and we'd still have graphics cards with only 8GB of RAM on them. I'm not considering the Apple architecture because they already solder RAM and mark it up so much that normal people can't afford a sane system anyway, so I give no shits how awesome their unified architecture is
3: I also should have drawn more attention to the inference need, since AIUI things like the TPUs I have on my desk aren't (able to do|good at) training jobs, but that's where my expertise grinds to a halt because I have no idea why that is or how to fix it
There's actually a multitude of different ways now that each have their own performance tradeoffs, like direct DMA from the Nvidia card, data copied via the CPU, GPUDirect Storage, and so on. You seem to understand the gist though, so these are mainly implementation details. Sometimes there are weird limitations with one method, like being limited to Quadro cards, or only working up to a fixed percentage of system memory.
The short answer is that all of them suck to different degrees and you don't want to use them if possible. They're enabled by default for virtually all systems because they significantly simplify CUDA programming. DDR is much less suitable than GDDR for feeding a bandwidth-hungry monster like a GPU, PCIe introduces high latency and further constraints, and any CPU involvement is a further slowdown. This would also apply to socketed memory on a GPU, though: significantly slower and less bandwidth.
There's also some additional downsides to accessing system RAM that we don't need to get into, like sometimes losing the benefits of caching and getting full cost memory accesses every time.
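For a concrete flavor of those default-enabled paths, here's a minimal sketch (CUDA runtime assumed; whether each path is fast, or even permitted, varies by GPU and driver) of unified memory versus mapped pinned host memory, both of which let the GPU reach into system RAM over PCIe:

    /* Two "let the GPU touch system RAM" paths. Both calls are real CUDA
       runtime APIs; performance characteristics vary by card and driver. */
    #include <cuda_runtime.h>
    #include <stddef.h>

    int main(void) {
        size_t bytes = (size_t)1 << 30;   /* 1 GiB, illustrative */

        /* Option A: unified (managed) memory -- the driver migrates pages
           between host RAM and VRAM on demand; simple, but page faults and
           PCIe migration can dominate runtime. */
        float *managed;
        cudaMallocManaged((void **)&managed, bytes, cudaMemAttachGlobal);

        /* Option B: pinned, mapped host memory -- the GPU dereferences host
           RAM directly ("zero-copy"), so every access pays DDR + PCIe cost. */
        float *host_pinned, *dev_view;
        cudaHostAlloc((void **)&host_pinned, bytes, cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&dev_view, host_pinned, 0);

        /* A kernel given `managed` or `dev_view` will run correctly, just
           bottlenecked by system RAM bandwidth rather than GDDR/HBM. */

        cudaFree(managed);
        cudaFreeHost(host_pinned);
        return 0;
    }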
> any CPU involvement is a further slowdown. This would also apply to socketed memory on a GPU though: Significantly slower and less bandwidth
I am afraid what I'm about to say doubles down on my inexperience, but: I could have sworn that series of problems is what DMA was designed to solve: peripherals do their own handshaking without requiring the CPU's involvement (aside from the "accounting" bits of marking regions as in-use). And thus if a GPGPU comes already owning its own RAM, it most certainly does not need to ask the CPU to do jack squat to talk to its own RAM because there's no one else who could possibly be using it
I was looking for an example of things that carried their own RAM and found this, which strictly speaking is what I searched for but is mostly just funny so I hope others get a chuckle too: a SCSI ram disk <https://micha.freeshell.org/ramdisk/RAM_disk.jpg>
Also, other systems have similar technologies, I'm just mentioning Nvidia as an example.
Do compilers optimize for specific RISC-V CPUs, not just profiles/extensions? Same for drivers and kernel support.
My understanding was that if it's RISC-V compliant, no extra work is needed for existing software to run on it.
It's not that things won't run, but this is necessary for compilers to generate well optimized code.
A simple example is that the CPU might support running two specific instructions better if they were adjacent than if they were separated by other instructions ( https://en.wikichip.org/wiki/macro-operation_fusion ). So the optimizer can try to put those instructions next to each other. LLVM has target features for this, like "lui-addi-fusion" for CPUs that will fuse a `lui; addi` sequence into a single immediate load.
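As a rough illustration (the exact output depends on the compiler and its -mcpu/-mtune settings, so treat the assembly in the comment as an approximation):

    /* Loading a 32-bit constant on RISC-V typically becomes a lui+addi pair.
       On a core that advertises lui-addi fusion, the compiler tries to keep
       the two instructions adjacent so the frontend can fuse them into one
       macro-op. Expected shape of the output is sketched below; the real
       assembly varies with compiler and tuning flags. */
    #include <stdint.h>

    uint32_t make_constant(void) {
        return 0x12345678u;
        /* approximately:
               lui  a0, 0x12345        # upper 20 bits
               addi a0, a0, 0x678      # lower 12 bits (addiw on RV64), fusible
               ret
        */
    }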
A more complex example is keeping track of the CPU's internal state. The optimizer models the state of the CPU's functional units (integer, address generation, etc) so that it has an idea of which units will be in use at what time. If the optimizer has to allocate multiple instructions that will use some combination of those units, it can try to lay them out in an order that will minimize stalling on busy units while leaving other units unused.
That information also tells the optimizer about the latency of each instruction, so when it has a choice between multiple ways to compute the same operation it can choose the one that works better on this CPU.
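A toy example of the kind of decision that model informs (conceptual only; the actual reordering happens inside the compiler, not in your source):

    /* The two chains below are independent. A compiler that knows this CPU's
       latencies and functional units can interleave them, so the short add
       chain executes in the cycles where the long-latency multiply chain
       would otherwise leave the core stalled. */
    #include <stdint.h>

    uint64_t two_chains(uint64_t a, uint64_t b) {
        uint64_t x = a * 7;   /* chain 1: dependent multiplies, long latency */
        x = x * 11;
        x = x * 13;

        uint64_t y = b + 3;   /* chain 2: dependent adds, short latency */
        y = y + 5;
        y = y + 9;

        return x ^ y;
    }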
See also: https://myhsu.xyz/llvm-sched-model-1/ https://myhsu.xyz/llvm-sched-model-1.5/
If you don't do this your code will still run on your CPU. It just won't necessarily be as optimal as it could be.
For GPUs today and in the foreseeable future, there are still good reasons for them to remain discrete, in some market segments. Low-power laptops have already moved entirely to integrated GPUs, and entry-level gaming laptops are moving in that direction. Desktops have widely varying GPU needs ranging from the minimal iGPUs that all desktop CPUs now already have, up to GPUs that dwarf the CPU in die and package size and power budget. Servers have needs ranging from one to several GPUs per CPU. There's no one right answer for how much GPU to integrate with the CPU.
And for low-power consumer devices like laptops, "matrix multiplication coprocessor for AI tasks" is at least as likely to mean NPU as GPU, and NPUs are always integrated rather than discrete.
Calling something a GPU tends to make people ask for (good, performant) support for opengl, Vulkan, direct3d... which seem like a huge waste of effort if you want to be an "AI-coprocessor".
Completely irrelevant to consumer hardware, in basically the same way as NVIDIA's Hopper (a data center GPU that doesn't do graphics). They're ML accelerators that for the foreseeable future will mostly remain discrete components and not be integrated onto Xeon/EPYC server CPUs. We've seen a handful of products where a small amount of CPU gets grafted onto a large GPU/accelerator to remove the need for a separate host CPU, but that's definitely not on track to kill off discrete accelerators in the datacenter space.
> Calling something a GPU tends to make people ask for (good, performant) support for opengl, Vulkan, direct3d... which seem like a huge waste of effort if you want to be an "AI-coprocessor".
This is not a problem outside the consumer hardware market.
AI inference's big bottleneck right now is RAM and memory bandwidth, not so much compute per se.
If we redid AI inference from scratch without consumer gaming considerations then it probably wouldn't be a coprocessor at all.
A GPU needs to run $GAME from $CURRENT_YEAR at 60 fps despite the ten million SLoC of shit code and legacy cruft in $GAME. That's where the huge expense for the GPU manufacturer lies.
Matrix multiplication is a solved problem and we need to implement it just once in hardware. At some point matrix multiplication will be ubiquitous, like floating point is now.
NVIDIA's biggest weakness right now is that none of their GPUs are appropriate for any system with a lower power budget than a gaming laptop. There's a whole ecosystem of NPUs in phone and laptop SoCs targeting different tradeoffs in size, cost, and power than any of NVIDIA's offerings. These accelerators represent the biggest threat NVIDIA's CUDA monopoly has ever faced. The only response NVIDIA has at the moment is to start working with MediaTek to build laptop chips with NVIDIA GPU IP and start competing against pretty much the entire PC ecosystem.
At the same time, all the various low-power NPU architectures have differing limitations owing to their diverse histories, and approximately none of them currently shipping were designed from the beginning with LLMs in mind. On the timescale of hardware design cycles, AI is still a moving target.
So far, every laptop or phone SoC that has shipped with both an NPU and a GPU has demonstrated that there are some AI workloads where the NPU offers drastically better power efficiency. Putting a small-enough NVIDIA GPU IP block onto a laptop or phone SoC probably won't be able to break that trend.
In the datacenter space, there are also tradeoffs that mean you can't make a one-size-fits-all chip that's optimal for both training and inference.
In the face of all the above complexity, whether a GPU-like architecture retains any actual graphics-specific hardware is almost a silly question. NVIDIA and AMD have both demonstrated that they can easily delete that stuff from their architectures to get more TFLOPs for general compute workloads using the same amount of silicon.
But there is much more to discrete GPUs than vector instructions or parallel cores. They have very different memory and cache systems with very different synchronization tradeoffs. A discrete GPU is like an embedded computer hanging off your PCIe bus, and that computer does not have the same stable architecture as the general-purpose CPU running the host OS.
In some ways, the whole modern graphics stack is a sort of integration and commoditization of the supercomputers of decades ago. What used to be special vector machines and clusters full of regular CPUs and RAM has moved into massive chips.
But as other posters said, there is still a lot more abstraction in the graphics/numeric programming models and a lot more compiler and runtime tools to hide the platform. Unless one of these hidden platforms "wins" in the market, it's hard for me to imagine general purpose OS and apps being able to handle the massive differences between particular GPU systems.
It could easily be like prior decades, where multicore wasn't taking off because most apps couldn't really use it, or where special things like the Cell processor in the PlayStation required very dedicated development to use effectively. The heterogeneity of system architectures makes general-purpose reuse hard, and makes it hard to "port" software that wasn't written with the platform in mind.
I wish them success, plus I hope they do not do what Intel did with its add-ons.
Hoping for an open system (which I think RISC-V is) and nothing even close to Intel ME or AMT.
https://en.wikipedia.org/wiki/Intel_Management_Engine
https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...
The architecture is independent of additional silicon with separate functions. The "only" thing which makes RISC-V open is that the specifications are freely available and freely usable.
Intel ME is, by design, separate from the actual CPU. Whether the CPU uses x86 or RISC-V is essentially irrelevant.
Are they going to make one with 16384 cores for AI / graphics or are they going to make one with 8 / 16 / 32 cores that can each execute like 20 instructions per cycle?
The biggest roadblock would be lack of support on the software side.
What it can't be is something like the Mill if they implement the RISC-V ISA.
I came to this thread looking for a comment about this. I've been patiently following along for over a decade now and I'm not optimistic anything will come from the project :(
The lack of high-performance RISC-V designs means that C/C++ compilers produce all-around good but generic code that can run on most RISC-V CPUs, from microcontrollers to a few commercially available desktops or laptops, but they can't exploit the high-performance design features of a specific CPU (e.g. exploit instruction timings or specific instruction sequences recommended for each generation). The real issue is that high-performance RISC-V designs are yet to emerge.
Producing a highly performant CPU is only one part of the job, and the next part requires compiler support, which can't exist unless the vendor publishes extensive documentation that explains how to get the most out of it.
The fact that California housing pushed Intel to Oregon probably helped lead to its failures. Every time a company relocates to get cost-of-living (and thus payroll) costs down by moving to a place with fewer potential employees and fewer competing employers, modernity slams on the brakes.
This wiki page has a list of Intel fab starts; you can see them being constructed in Oregon until 2013, after which all new construction moved elsewhere. https://en.wikipedia.org/wiki/List_of_Intel_manufacturing_si...
I can imagine this slow disinvestment in Oregon would only encourage some architects to quit and found a RISC-V startup.
Arizona is also a mistake --- a far worse place for high tech than Oregon! It is a desert real estate Ponzi scheme with no top-tier schools and no history of top-tier, high-skill intellectual job markets. In general the Sun Belt (including LA) is the land of stupid.
The electoral college is always winning out over the best economic geography, and it sucks.
Your CPU changes with every app, tab and program you open. Changing from one core, to n-core plus AI-GPU and back. This idea, that you have to write it all in stone, always seemed wild to me.
RISC-V is the fifth in a series of academic chip designs at Berkeley (hence its name).
In terms of design philosophy, it's probably closest to MIPS of the major architectures; I'll point out that some of its early whitepapers are explicitly calling out ARM and x86 as the kind of architectural weirdos to avoid emulating.
Says every new system without legacy concerns.
Also, I don't mean to come off as confrontational; I genuinely don't know.
Interestingly, I recently completed a masters-level computer architecture course and we used MIPS. However, starting next semester the class will use RISC-V instead.
Given that the core motivation of RISC was maximally performant architecture design, the authors of RISC-V would disagree with you that their approach compromises performance.
AArch64 is pretty much a completely new ISA built from the ground up.
There actually have been changes for "today's needs," and they're usually things like AES acceleration. ARM tried to run Java natively with Jazelle, though it's still best to think of that as a frontend; the fact that Android is mostly Java on ARM, yet this feature got dropped, says a lot.
The fact that there haven't been that many changes shows they got the fundamental operations and architecture styles right. What's lacking today is where GPUs step in: massively wide SIMD.
It exists, and was specifically designed to go wide, since clock speeds have limits but ILP can be scaled almost infinitely if you are willing to put enough transistors into it: AArch64.
So we have:
CISC – which is still used outside the x86 bubble;
RISC – which is widely used;
Hybrid RISC/CISC designs – excluding x86, that would be the IBM z/Architecture (i.e. mainframes);
EPIC/VLIW – which has been largely unsuccessful outside DSPs and a few niches.
They all deal with registers, moves, and testing conditions, though, and one can't say that an ISA 123 that effectively does the same thing as an ISA 456 is older or newer. SIMD instructions have been the latest addition, and they also follow the same well-known mental and compute models.

Radically different designs, such as the Intel iAPX 432, Smalltalk CPUs, and Java CPUs, have not received any meaningful acceptance, and it seems that the idea of a CPU architecture tied to a higher-level compute model has been eschewed in perpetuity. Java CPUs were the last massively hyped-up attempt to change that, and it was 30 years ago.
What other viable alternatives outside the von Neumann architecture are available to us? I am not sure.
• at least 91 bits are used to encode the instruction
• at least 23 bits are used to encode control information associated with multiple instructions
• the remaining 14 bits appear to be unused
AMD GPUs are similar, I believe. VLIW is good for instruction density. VLIW was unsuccessful in CPUs like Itanium because the compiler was expected to handle (unpredictable) memory access latency. This is not possible, even today, for largely sequential workloads. But GPUs typically run highly parallel workloads (e.g. matmul), and the dynamic scheduler can just 'swap out' threads that are waiting on memory loads. Your GPU will also perform terribly on highly sequential workloads.
[1] Z. Jia, M. Maggioni, B. Staiger, D. P. Scarpazza, Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking. https://arxiv.org/abs/1804.06826
I didn't consider GPUs precisely for the reason you mentioned – their unsuitability for running sequential workloads, which is most of the applications end users run – even though nearly every modern computing contraption in existence has one today.
One, most assuredly, radical departure from the von Neumann architecture that I completely forgot about is the dataflow CPU architecture, which is vastly different from what we have been using for the last 60+ years. Even though there have been no productionised general-purpose dataflow CPUs, the idea has been successfully implemented for niche applications, mostly in networking. So, circling back to the original point raised, dataflow CPU instructions would certainly qualify as a new design.
• the "memory wall",
• the static unpredictability of memory access, and
• the lack of sufficient parallelism for masking latency.
Those make dynamically scheduling instructions just much more efficient.
Dataflow has been tried many, many, many times for general-purpose workloads. And every time it failed for general-purpose workloads. In the early 2020s I was part of an expensive team doing a blank-slate dataflow architecture for a large semi company: the project got cancelled because the performance figures were weak relative to the complexity of the micro-architecture, which was high (hence expensive verification and high area). As one of my colleagues on that team says: "Everybody wants to work on dataflow until he works on dataflow." Regarding the history of dataflow architectures, [1] is from 1975, so half a century old this year.
[1] J. Dennis, A Preliminary Architecture for a Basic Data-Flow Processor https://courses.cs.washington.edu/courses/cse548/11au/Dennis...
Yet there is something about object-oriented ISAs that has made CPU designers eschew them consistently. Ranging from the Intel iAPX-432, to the Japanese Smalltalk Katana CPU, to jHISC, to another, unrelated, Katana CPU by the University of Texas and the University of Illinois, none of them has ever yielded a mainstream OO CPU. Perhaps modern computing is not very object oriented after all.
There's plenty of people who would be fine doing unexciting dead end work if they were compensated well enough (pay, work-life balance, acknowledgement of value, etc).
This is ye olde Creative Destruction dilemma. There's too much inertia and politics internally to make these projects succeed in-house. But if a startup were owned by the org, and they mapped out a path for how to absorb it after it takes off, they would then reap the rewards rather than watch yet another competitor eat their lunch.
The only way I've seen anyone deal with this issue successfully is with rather small companies which don't have nearly as much of the whole agency cost of management to deal with.
And that's not sarcasm, I'm serious.
SFCompute
And so on … definitely not out of trend
Just as a thought experiment, consider the fact that the i80486 has 1.2 million transistors. An eight core Ryzen 9700X has around 12 billion. The difference in clock speed is roughly 80 times, and the difference in number of transistors is 1,250 times.
These are wild generalizations, but let's ask ourselves: If a Ryzen takes 1,250 times the transistors for one core, does one core run 1,250 times (even taking hyperthreading into account) faster than an i80486 at the same clock? 500 times? 100 times?
It doesn't, because massive amounts of those transistors go to keeping things in sync, dealing with changes in execution, folding instructions, decoding a horrible instruction set, et cetera.
So what might we be able to do if we didn't need to worry about figuring out how long our instructions are? Didn't need to deal with Spectre and Meltdown issues? If we made out-of-order work in ways where much more could be in flight, and the compilers / assemblers knew how to avoid stalls based on dependencies, or how to schedule dependencies? What if we took expensive operations, like semaphores / locks, and built solutions into the chip?
Would we get to 1,250 times faster for 1,250 times the number of transistors? No. Would we get a lot more performance than we get out of a contemporary x86 CPU? Absolutely.
I should've written per core.
CPUs scaled tall, with specialized instructions to make the single thread go faster. No, the amount done per transistor does not scale anywhere near linearly; very many of the transistors are dark on any given cycle compared to a much simpler core that would have much higher utilization.
I'm pretty sure that these goals will conflict with one another at some point. For example, the way one solves Spectre/Meltdown issues in a principled way is by changing the hardware and system architecture to have some notion of "privacy-sensitive" data that shouldn't be speculated on. But this will unavoidably limit the scope of OOO and the amount of instructions that can be "in-flight" at any given time.
For that matter, with modern chips, semaphores/locks are already implemented with hardware builtin operations, so you can't do that much better. Transactional memory is an interesting possibility but requires changes on the software side to work properly.
That kind of takes the Spectre/Meltdown thing out of the way to some degree, I would think, although privilege elevation can happen in the darndest places.
But maybe I'm being too optimistic
And from each individual core:
- 25% per core L1/L2 cache
- 25% vector stuff (SSE, AVX, ...)
- from the remaining 50% only about 20% is doing instruction decoding
Would be interesting to see a benchmark on this.
If we restricted it to 486 instructions only, I'd expect the Ryzen to be 10-15x faster. The modern CPU will perform out-of-order execution with some instructions even run in parallel, even in single-core and single-threaded execution, not to mention superior branch prediction and more cache.
If you allowed modern instructions like AVX-512, then the speedup could easily be 30x or more.
> Would we get to 1,250 times faster for 1,250 times the number of transistors? No. Would we get a lot more performance than we get out of a contemporary x86 CPU? Absolutely.
I doubt you'd get significantly more performance, though you'd likely gain power efficiency.
Half of what you described in your hypothetical instruction set is already implemented in ARM.
Clock speed is about 50x and IPC, let's say, 5-20x. So it's roughly 500x faster.
The line I was commenting on said:
> If a Ryzen takes 1,250 times the transistors for one core, does one core run 1,250 times (even taking hyperthreading into account) faster than an i80486 *at the same clock*?
Emphasis added by me.
A 16-core Zen 5 CPU achieves more than 2 TFLOPS of FP64, so number-crunching performance scaled very well.
It is weird that the best consumer GPU can do 4 TFLOPS. Some years ago GPUs were an order of magnitude or more faster than CPUs. Today's GPUs are likely artificially limited.
[1] https://www.techpowerup.com/gpu-specs/radeon-pro-vii.c3575
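For the CPU side, a back-of-the-envelope peak calculation (the per-core figures are my assumptions about Zen 5's AVX-512 FMA throughput and clocks, not measured values):

    /* Theoretical peak FP64 for a 16-core Zen 5 part; all inputs are
       assumptions, and sustained numbers will be lower. */
    #include <stdio.h>

    int main(void) {
        double cores         = 16;
        double simd_doubles  = 8;    /* 512-bit vector / 64-bit double */
        double fma_pipes     = 2;    /* assumed FMA units per core */
        double flops_per_fma = 2;    /* multiply + add */
        double clock_ghz     = 5.0;  /* assumed all-core clock */

        double tflops = cores * simd_doubles * fma_pipes * flops_per_fma
                        * clock_ghz / 1000.0;
        printf("theoretical peak: ~%.1f TFLOPS FP64\n", tflops);  /* ~2.6 */
        return 0;
    }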
These aren't realistic numbers in most cases because you're almost always limited by memory bandwidth, and even if memory bandwidth is not an issue you'll have to worry about thermals. Theoretical CPU compute ceiling is almost never the real bottleneck. GPU's have a very different architecture with higher memory bandwidth and running their chips a lot slower and cooler (lower clock frequency) so they can reach much higher numbers in practical scenarios.
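To put a rough number on that (bandwidth and peak figures are assumptions), a quick roofline-style check of how much arithmetic you need per byte moved before the compute ceiling, rather than memory bandwidth, becomes the limit:

    /* Roofline-style break-even point: below this arithmetic intensity the
       kernel is bandwidth-bound, not compute-bound. Figures are assumptions. */
    #include <stdio.h>

    int main(void) {
        double peak_gflops  = 2500.0;  /* assumed FP64 peak for a 16-core part */
        double mem_gb_per_s = 90.0;    /* assumed dual-channel DDR5 bandwidth */

        printf("need > %.0f FLOPs per byte moved to be compute-bound\n",
               peak_gflops / mem_gb_per_s);   /* ~28 */
        return 0;
    }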
It's useful comparison in terms of achievable performance per transistor count.
and spending millions on patent lawsuits ...
Zen 3 example: https://www.reddit.com/r/Amd/comments/jqjg8e/quick_zen3_die_...
So, more like 85%, or around 6 orders of magnitude difference from your guess. ;)
Modern CPUs also have a lot of things integrated into the "CPU" that used to be separate chips. The i486 didn't have on-die memory or PCI controllers etc., and those things were themselves less complicated then (e.g. a single memory channel and a shared peripheral bus for all devices). The i486SX didn't even have a floating point unit. The Ryzen 9000 series die contains an entire GPU.
That's basically x86 without 16- and 32-bit support, no real mode, etc.
The CPU starts initialized in 64-bit mode without all that legacy crap.
That's IMO a great idea. I think every few decades we need to stop and think again about what works best and take a fresh start, or drop some legacy unused features.
RISC-V has only a mandatory basic set of instructions, as little as possible to be Turing complete, and everything else is an extension that can (theoretically) be removed in the future.
This could also be used to remove legacy parts without disrupting the architecture.
For serial branchy code, it isn't a million times faster, but that has almost nothing to do with legacy and everything to do with the nature of serial code: you can't linearly improve serial execution with architecture and transistor counts (you can improve it sublinearly), only with clock speed, i.e. Dennard scaling.
It is worth noting, though, that purely via Dennard scaling, Ryzen is already >100x faster! And via architecture (those transistors) it is several multiples beyond that.
In general compute, if you could clock it down at 33 or 66MHz, a Ryzen would be much faster than a 486, due to using those transistors for ILP (instruction-level parallelism) and TLP (thread-level parallelism). But you won't see any TLP in a single serial program that a 486 would have been running, and you won't get any of the SIMD benefits either, so you won't get anywhere near that in practice on 486 code.
The key to contemporary high performance computing is having more independent work to do, and organizing the data/work to expose the independence to the software/hardware.
what makes it more likely to work this time?
That said, yesterday I saw gcc generate 5 KB of mov instructions because it couldn't gracefully handle a particular vector size so I wouldn't get my hopes up...
The Intel 80486, with 1.2M transistors, delivered 0.128 FLOPs / cycle.
The NVIDIA 4070 Ti Super, with 45.9B transistors, delivers 16896 FLOPs / cycle.
As you see, each transistor became 3.45 times more efficient at delivering these FLOPs per cycle.
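The ratio just divides out the two figures above:

    /* FLOPs-per-cycle-per-transistor ratio from the numbers above. */
    #include <stdio.h>

    int main(void) {
        double i486_eff = 0.128 / 1.2e6;      /* i80486 */
        double gpu_eff  = 16896.0 / 45.9e9;   /* 4070 Ti Super */
        printf("per-transistor improvement: %.2fx\n", gpu_eff / i486_eff); /* ~3.45 */
        return 0;
    }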
I've written about it at length and I'm sure that anyone who's seen my comments is sick of me sounding like a broken record. But there's truly a vast realm of uncharted territory there. I believe that transputers and reprogrammable logic chips like FPGAs failed because we didn't have languages like Erlang/Go and GNU Octave/MATLAB to orchestrate a large number of processes or handle SIMD/MIMD simultaneously. Modern techniques like passing by value via copy-on-write (used by UNIX forking, PHP arrays and Clojure state) were suppressed when mainstream imperative languages using pointers and references captured the market. And it's really hard to beat Amdahl's law when we're worried about side effects. I think that anxiety is what inspired Rust, but there are so many easier ways of avoiding those problems in the first place.
High bandwidth memory on-package with 352 AMD Zen 4 cores!
With 7 TB/s memory bandwidth, it’s basically an x86 GPU.
This is the future of high performance computing. It used to be available only for supercomputers but it’s trickling down to cloud VMs you can rent for reasonable money. Eventually it’ll be standard for workstations under your desk.
-
I just want to leave this breadcrumb showing possible markets and applications for high-performance computing (HPC), specifically regarding SpiNNaker which is simulating neural nets (NNs) as processes communicating via spike trains rather than matrices performing gradient descent:
https://news.ycombinator.com/item?id=44201812 (Sandia turns on brain-like storage-free supercomputer)
https://blocksandfiles.com/2025/06/06/sandia-turns-on-brain-... (working implementation of 175,000 cores)
https://www.theregister.com/2017/10/19/steve_furber_arm_brai... (towards 1 million+ cores)
https://www.youtube.com/watch?v=z1_gE_ugEgE (518,400 cores as of 2016)
https://arxiv.org/pdf/1911.02385 (towards 10 million+ cores)
https://docs.hpc.gwdg.de/services/neuromorphic-computing/spi... (HPC programming model)
I'd use a similar approach but probably add custom memory controllers that calculate hashes for a unified content-addressable memory, so that arbitrary network topologies can be used. That way the computer could be expanded as necessary and run over the internet without modification. I'd also write something like a microkernel to expose the cores and memory as a unified desktop computing environment, then write the Python HPC programming model over that and make it optional. Then users could orchestrate the bare metal however they wish with containers, forked processes, etc.
-
A possible threat to the HPC market would be to emulate MIMD under SIMD by breaking ordinary imperative machine code up into parallelizable immutable (functional) sections bordered by IO handled by some kind of monadic or one-shot logic that prepares inputs and obtains outputs between the functional portions. That way individual neurons, agents for genetic algorithms, etc could be written in C-style or Lisp-style code that's transpiled to run on SIMD GPUs. This is an open problem that I'm having trouble finding published papers for:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4611137 (has PDF preview and download)
Without code examples, I'd estimate MIMD->SIMD performance to be between 1-2 orders of magnitude faster than a single-threaded CPU and 1-2 orders of magnitude slower than a GPU. Similar to scripting languages vs native code. My spidey sense is picking up so many code smells around this approach though that I suspect it may never be viable.
-
I'd compare the current complexities around LLMs running on SIMD GPUs to trying to implement business logic as a spaghetti of state machines instead of coroutines running conditional logic and higher-order methods via message passing. Loosely that means that LLMs will have trouble evolving and programming their own learning models. Whereas HPC doesn't have those limitations, because potentially every neuron can learn and evolve on its own like in the real world.
So a possible bridge between MIMD and SIMD would be to transpile CPU machine code coroutines to GPU shader state machines:
https://news.ycombinator.com/item?id=18704547
https://eli.thegreenplace.net/2009/08/29/co-routines-as-an-a...
In the end, they're equivalent. But a multi-page LLM specification could be reduced down to a bunch of one-liners because we can reason about coroutines at a higher level of abstraction than state machines.
Once you follow the logical steps to increase utilization/efficiency you end up with something like a GPU, and that comes with the programming challenges that we have today.
In other words, it's not like CPU architects didn't think of that. Instead, there are good reasons for the status quo.
https://www.notebookcheck.net/Intel-CEO-abruptly-trashed-Roy...
How many times do we have to see these stories play out before we realize it doesn't matter where they came from? These big companies employ a lot of people of varying skill, so having it on your resume means almost nothing IMHO.
Just look at the Humane pin full of “ex-Apple employees”, how’d that work out? And that’s only one small example.
I hope IO (OpenAI/Jony Ive) fails spectacularly, so that we have an even better example to point to and we can dispel the idea that doing something impressive early in your career, or working for an impressive company, means you will continue to do so.
Moreover, if the ex company was so wonderful and they were so integral to it, why aren't they still there? If they did something truly important, why not just advertise that (and I'm putting aside here qualms about overt advertising rather than something more subtle, authentic, organic).