ARM processors primarily use a modified Harvard architecture, including the Raspberry Pi Pico.
I think this post is more about... compute in memory? If I got it right?
This is, IIRC, part of why Apple's M-series chips are as performant as they are: the unified memory architecture eliminates the need to copy data from CPU main memory to GPU or NPU memory to operate on it (and then copy the result back), and having the RAM on the package means it's slightly "more local" and the memory channels can be optimized for the specific system they're going to be connected to.
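To make the copy overhead concrete, here's a rough sketch using PyTorch as a stand-in (the device choice, tensor size, and framing are my own, not from the article): on a discrete-GPU machine the data crosses the bus twice, while on a unified-memory SoC those explicit round trips are largely avoided or become cheap in-package transfers.

```python
import torch

# Illustrative only: the two .to(...) calls below are the round trips the
# comment is talking about. On a discrete GPU they mean real copies over PCIe;
# on a unified-memory SoC the CPU, GPU, and NPU share one physical pool of RAM,
# so this shuffling is mostly unnecessary.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(8192, 8192)      # lives in CPU main memory
x_dev = x.to(device)             # copy #1: host -> accelerator memory
y_dev = x_dev @ x_dev            # compute where the data now lives
y = y_dev.to("cpu")              # copy #2: result back to host memory
print(y.shape)
```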
Edit: see also the ARM7TDMI, Cortex-M0/M0+/M1, and probably a few others. All the big stuff is modified Harvard or, very rarely, pure Harvard.
Faster interconnects are always nice, but this is more of a routine improvement.
It's also fascinating that they're experimenting with analog memory, because it pairs so well with model weights.
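A toy way to see why analog memory and model weights fit together: if the weights are stored as conductances in a crossbar, the matrix-vector product is essentially the physics of the array, and the main cost is imprecision in the stored values. The NumPy sketch below (with a made-up noise level, not anything from the linked paper) illustrates that trade-off.

```python
import numpy as np

# Assumption-laden illustration: weights live in analog cells, the multiply
# happens where they're stored, and device imperfection shows up as noise on
# the stored values rather than as extra compute or data movement.
rng = np.random.default_rng(0)

weights = rng.normal(size=(256, 512))   # ideal digital weights
x = rng.normal(size=512)                # input activations (think: voltages)

# Programming analog cells is imprecise: each conductance carries some error
# relative to the target weight (2% is an arbitrary value for this sketch).
programming_noise = 0.02
analog_weights = weights + rng.normal(scale=programming_noise, size=weights.shape)

# The "computation" is just currents summing along each row of the array,
# i.e. a matrix-vector product done in place where the weights live.
y_ideal = weights @ x
y_analog = analog_weights @ x

rel_err = np.linalg.norm(y_analog - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative error from analog imprecision: {rel_err:.3%}")
```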
This is being done, with great results so far. As models get better, architecture search, creation, and refinement improve, driving a reinforcement loop. At some point in the near future the big labs will likely start seeing significant returns from methods like this, translating into better and faster AI for consumers.
https://www.science.org/doi/full/10.1126/science.adh1174
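As a rough illustration of the loop being described (propose candidate architectures, evaluate them, keep the best, repeat), here's a minimal sketch; the search space, proposer, and scorer are all invented for illustration rather than taken from the paper, and in practice the proposer would itself be a learned model and the scorer a real training/validation run.

```python
import random

# Minimal, hypothetical architecture-search loop (not anyone's actual system).
SEARCH_SPACE = {
    "depth": [4, 8, 16, 32],
    "width": [128, 256, 512],
    "attention": ["dense", "sparse", "linear"],
}

def propose():
    # Uniform random sampling stands in for a learned proposal model.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    # Stand-in for training and validating the candidate; returns a fake score
    # that mildly rewards deeper configurations so the loop has something to find.
    return random.random() + 0.1 * SEARCH_SPACE["depth"].index(arch["depth"])

best, best_score = None, float("-inf")
for step in range(20):
    candidate = propose()
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score
print("best architecture found:", best, "score:", round(best_score, 3))
```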
Also, they've been working on this for 10+ years, so it's not exactly new news.
Which is fine! I’m all for iterative improvements; it’s how we got to where we are today. I just wish more folks would start openly admitting that our current architecture designs are broadly based on the “low hanging fruit” of early electronics and microprocessors, followed by a century of iterative improvements. With the easy improvements already done and universally integrated, we’re stuck at a crossroads:
* Improve our existing technologies iteratively and hope we break through some barrier to achieve rapid scaling again
OR
* Accept that we cannot achieve new civilizational uplifts with existing technologies, and invest more capital into frontier R&D (quantum processing, new compute substrates, etc)
I feel like our current addiction to the AI CAPEX bubble is a desperate Hail Mary to validate our current tech as the only way forward, when in fact we haven’t really sufficiently explored alternatives in the modern era. I could very well be wrong, but that’s the read I get from the hardware side of things, watching us backslide into a ’90s-style era of custom chips just to achieve basic efficiency gains again.