I've read the attention paper, watched Karpathy's own YouTube lectures, implemented toy transformers in notebooks. But reading through this C implementation I finally grokked why KV-cache matters at the systems level - because when you see the memory layout spelled out in raw mallocs instead of hidden behind PyTorch abstractions, the O(n^2) vs O(n) tradeoff for cached inference becomes visceral rather than theoretical.
This is the same reason people learn more about operating systems from xv6 than from reading the Linux source. Minimal implementations aren't just educational toys - they're the only way to build intuition about what the abstractions are actually doing. The fact that a full GPT-2 fits in ~1000 lines of C should make every ML engineer uncomfortable about how much complexity their frameworks are hiding from them.
Can you explain the significance of O(n^2) vs O(n) better?
With a KV cache you store the key/value vectors from previous tokens, so when generating token n you only compute the new token's query against the cached keys. Each step is O(n) instead of the O(n^2) you'd pay recomputing attention over every position from scratch, so total work across all steps drops from O(n^3) to O(n^2) - you never redo matrix multiplies you already did.
The thing that clicked for me reading the C code was seeing exactly where those cached vectors get stored in memory and how the pointer arithmetic works. In PyTorch it's just `past_key_values` getting passed around and you never think about it. In C you see the actual buffer layout and it becomes obvious why GPU memory is the bottleneck for long sequences.
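The buffer-layout point is easy to sketch: with a flat cache allocated up front, indexing is bare pointer arithmetic and the memory cost is explicit. Dimensions and names below are assumptions (GPT-2-small-ish), not the actual project's layout.

```c
#include <stddef.h>

/* Assumed GPT-2-small-like dims; the real project's names may differ. */
enum { N_LAYER = 12, N_HEAD = 12, HEAD_DIM = 64, MAX_SEQ = 1024 };

/* One flat float buffer holding K and V for every layer, laid out as
   [layer][k_or_v][token][head][head_dim]. */
size_t kv_offset(int layer, int kv, int token, int head) {
    return ((((size_t)layer * 2 + kv) * MAX_SEQ + token) * N_HEAD + head)
           * HEAD_DIM;
}

/* Total cache size in bytes: linear in max sequence length, which is
   exactly why long contexts are memory-bound. */
size_t kv_bytes(void) {
    return (size_t)N_LAYER * 2 * MAX_SEQ * N_HEAD * HEAD_DIM * sizeof(float);
}
```

For these dims that's about 72 MiB of fp32 cache per sequence, and every additional token of context costs N_LAYER * 2 * N_HEAD * HEAD_DIM floats (72 KiB here) regardless of how cheap the compute is - the buffer math makes the bottleneck visible in a way `past_key_values` never does.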
antonvs•1h ago
Seriously though, despite being described as an "art project", a project like this can be invaluable for education.