Animats•58m ago
What they seem to want is fast-read, slow-write memory. "Primary applications include model weights in ML inference, code pages, hot instruction paths, and relatively static data pages."
Is there device physics for cheaper, smaller fast-read, slow-write memory cells for that?
For "hot instruction paths", caching is already the answer. Not sure about locality of reference for model weights. Do LLMs blow the cache?
toast0•36m ago
Probably not what they want, but NOR flash is generally directly addressable; it's commonly used to replace mask ROMs.
bobmcnamara•28m ago
NOR is usually limited to <30 MHz, but if you always want to fetch an entire cache line and design the read port accordingly, you can fetch the whole line at once, so that's pretty neat.
I don't know if anyone has applied this to neural networks.
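To make the idea concrete, here is a minimal C sketch of that line-granular access pattern, assuming an execute-in-place (XIP) NOR array memory-mapped at a made-up base address; the address, line size, and function name are illustrative assumptions, not from the thread or the paper.

    #include <stdint.h>

    #define CACHELINE_BYTES 64

    /* Hypothetical XIP window where the NOR array is memory-mapped.
     * The base address and line size are placeholders. */
    #define NOR_XIP_BASE ((const volatile uint8_t *)0x08000000u)

    /* Fetch one whole cache line's worth of weights from NOR. A wide
     * read port in hardware would return all 64 bytes in a single
     * access; in C we just express the line-granular read. */
    static void nor_fetch_line(uint32_t line_index, uint8_t dst[CACHELINE_BYTES])
    {
        const volatile uint8_t *src =
            NOR_XIP_BASE + (uint32_t)line_index * CACHELINE_BYTES;
        for (int i = 0; i < CACHELINE_BYTES; i++)
            dst[i] = src[i];
    }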
bobmcnamara•33m ago
> Do LLMs blow the cache?
Sometimes very yes?
If you've got 1GB of weights, those are coming through the caches on their way to the execution units somehow.
Many caches are smart enough to recognize these accesses as a strided, streaming, heavily prefetchable, evictable read, and optimize for that.
Many models are now quantized too, to reduce the overall memory bandwidth needed for execution, which also helps with caching.
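As a rough illustration of both points, here is a C sketch of a dot product over int8-quantized weights streamed sequentially with an explicit prefetch hint: each weight byte is touched exactly once, and int8 plus a float scale moves roughly 4x fewer bytes than float32. The function name, scale handling, and prefetch distance are assumptions for the example, not taken from the thread.

    #include <stddef.h>
    #include <stdint.h>

    /* Streamed dot product over int8-quantized weights. The access
     * pattern is the strided, streaming, prefetchable case described
     * above; a hardware stride prefetcher usually catches it anyway. */
    float dot_q8(const int8_t *w, const float *x, size_t n, float scale)
    {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++) {
            /* Hint a few cache lines ahead, read access, low temporal
             * locality (we won't reuse these weights this pass). */
            if ((i & 63) == 0 && i + 256 < n)
                __builtin_prefetch(&w[i + 256], 0, 0);
            acc += (float)w[i] * x[i];
        }
        return acc * scale;
    }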
photochemsyn•13m ago
Yes, this from the paper:
> "The key insight motivating LtRAM is that long data lifetimes and read heavy access patterns allow optimizations that are unsuitable for general purpose memories. Primary applications include model weights in ML inference, code pages, hot instruction paths, and relatively static data pages—workloads that can tolerate higher write costs in exchange for lower read energy and improved cost per bit. This specialization addresses fundamental mismatches in current systems where read intensive data competes for the same resources as frequently modified data."
Essentially I guess they're calling for more specialized hardware for LLM tasks, much like what was done with networking equipment for dedicated packet processing, with specialized SRAM/DRAM/TCAM tiers to keep latency to a minimum.
While there's an obvious need for that in moving traffic across the internet, whether LLMs are really going to scale like that, or whether there's a massive AI/LLM bubble about to pop, is the practical issue, and who knows? The tea leaves are unclear.
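For a sense of what such tiered placement could look like on the software side, here is a hedged C sketch that assumes the read-optimized tier is exposed to the OS as an ordinary NUMA node (node 1 here is a placeholder), the way CXL-attached memory is today, and binds a weight buffer to it with Linux's mbind(2); nothing here comes from the paper itself.

    #define _GNU_SOURCE
    #include <numaif.h>     /* mbind(), MPOL_BIND; link with -lnuma */
    #include <sys/mman.h>
    #include <stdio.h>

    /* Hypothetical: the read-optimized tier (LtRAM, CXL memory, etc.)
     * appears as NUMA node 1. Node number and size are placeholders. */
    #define READ_TIER_NODE 1
    #define WEIGHTS_BYTES  (1UL << 30)   /* ~1 GB of model weights */

    int main(void)
    {
        void *weights = mmap(NULL, WEIGHTS_BYTES, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (weights == MAP_FAILED) { perror("mmap"); return 1; }

        /* Bind the pages to the read-optimized node before first touch,
         * so write-once/read-many weights don't compete with frequently
         * modified data for the general-purpose tier. */
        unsigned long nodemask = 1UL << READ_TIER_NODE;
        if (mbind(weights, WEIGHTS_BYTES, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) != 0)
            perror("mbind");

        /* ... load weights once (the expensive write), then serve reads ... */
        return 0;
    }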
Grosvenor•24m ago
I'll put the Tandem five-minute-rule paper here; it seems very relevant.
I'm not seeing the case for adding this to general-purpose CPUs/software. Only a small portion of software is ever going to be properly annotated to take advantage of this, so it'd be a pointless cost for everyone else. Nominally short-term access can easily become long-term in the tail if the process gets preempted by something higher priority or spends a lot of time on an I/O operation. It's also not clear why, if you had an efficient solution for the short-term case, you wouldn't just add a refresh cycle and use it in place of normal SRAM as a generic cache. These make a lot more sense in a dedicated hardware context -- like neural nets -- which I think is the authors' main target here.
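For reference, about the only "annotation" ordinary software can express with today's interfaces is a hedged sketch like the following C example: map a weights file read-only and give the kernel an access-pattern hint. The file name is a placeholder, and a real LtRAM tier would need a hint beyond these existing calls.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder file name for the read-mostly data. */
        int fd = open("model.weights", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

        /* Read-only mapping: the data is written once (by whoever made
         * the file) and only read for the life of the process. */
        void *w = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (w == MAP_FAILED) { perror("mmap"); return 1; }

        /* Existing access-pattern hint; placing pages into a separate
         * read-optimized tier would need something new. */
        madvise(w, (size_t)st.st_size, MADV_WILLNEED);

        /* ... inference reads w here ... */
        munmap(w, (size_t)st.st_size);
        close(fd);
        return 0;
    }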