I think the paged attention part is a bit oversimplified. Nice read otherwise!
ani17•1h ago
Author here. I wanted to understand what vLLM and llama.cpp are actually doing under the hood, but the codebases are massive. So I wrote a stripped-down version from scratch to see the core ideas without the production complexity.
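Fair point on the paged attention simplification. The core bookkeeping trick, as I understand it, is just a page table for the KV cache: each sequence maps logical token positions to fixed-size physical blocks, so memory is allocated on demand instead of reserved contiguously up front. A rough sketch of that idea (names like `BLOCK_SIZE`, `BlockAllocator`, and `Sequence` are my own, not vLLM's actual classes):

```python
BLOCK_SIZE = 16  # tokens per physical KV block (illustrative choice)

class BlockAllocator:
    """Hands out physical KV-cache block ids from a free pool."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))

    def alloc(self):
        return self.free.pop()

    def free_block(self, block_id):
        self.free.append(block_id)

class Sequence:
    """Maps logical token positions to physical blocks via a block table."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self):
        # Grab a fresh physical block only when the current one is full.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1

    def physical_slot(self, pos):
        # Where token `pos`'s K/V vectors live: (physical block, offset).
        return self.block_table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

alloc = BlockAllocator(num_blocks=64)
seq = Sequence(alloc)
for _ in range(40):
    seq.append_token()
print(len(seq.block_table))   # 40 tokens -> ceil(40/16) = 3 blocks
print(seq.physical_slot(17))  # token 17 -> block_table[1], offset 1
```

The attention kernel then gathers K/V through this indirection, which is where the real complexity (and the CUDA work) lives; the sketch only covers the allocation side.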
lazyMonkey69•1h ago