However, it removes attention, so I think it's worth watching that space of non-attention models.
Also, the section on DeepSeek is really weird: "While the precise architectural details of DeepSeek LLM are still emerging, early discussions suggest that it relies on an extended Transformer backbone or a "hybrid" approach that likely incorporates some form of attention-based mechanism, potentially at specific layers or across chunk boundaries, to facilitate information flow across large contexts." It makes it sound like a mystery, even though there have been multiple papers published on it (they cite the R1 one), so there's really no need to guess whether attention is involved.
Overall I'm not convinced the authors know what they're doing.
It's not inherently bad to use an LLM for consistency, language, and overall sprucing up, but this takes it a bit too far. It seems like they've prompted it to explain some notes, but it's unclear how well it did, since the notes themselves (i.e. the data, experiments, etc.) are missing. And it seems poorly prompted, in that the result is lots of fluff paragraphs, devoid of core knowledge, going round and round explaining the same concepts in different words.
In the end, the responsibility for the final product is always on the submitter. This whole paper could have been a prompt, and it's worrying that this is accepted at such a prestigious school.
Also note: unless the sequence length is much larger than the model dimension (say, a couple of orders of magnitude larger), the quadratic complexity of self-attention is really not such a big issue - the matrix multiplications in the feed-forward layers are usually around 8x the model dimension squared per token, and that part will usually dominate.
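A rough back-of-the-envelope check of that (my own numbers, counting multiply-adds per token per layer and assuming a vanilla transformer block with an FFN hidden size of 4x the model dimension):

```python
def per_token_macs(d_model, seq_len):
    """Approximate multiply-adds per token, per layer, for a vanilla
    transformer block with FFN hidden size 4*d_model."""
    qkvo_proj = 4 * d_model**2              # Q, K, V and output projections
    attn_quadratic = 2 * seq_len * d_model  # QK^T scores + weighted sum of V
    ffn = 8 * d_model**2                    # two matmuls: d -> 4d and 4d -> d
    return qkvo_proj, attn_quadratic, ffn

d = 4096
for n in (4_096, 16_384, 131_072, 1_048_576):
    proj, attn, ffn = per_token_macs(d, n)
    print(f"n={n:>9}: projections={proj:.2e}  attn(n^2 part)={attn:.2e}  ffn={ffn:.2e}")
# With these counts the quadratic attention term only matches the FFN matmuls
# around n ≈ 4*d (and FFN + projections around n ≈ 6*d), so for contexts that
# are not far beyond the model dimension the MLP cost dominates.
```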
Also note that there has been a lot of research on this already. While this particular approach might be novel, there have been attempts to avoid the O(n^2) complexity of self-attention basically since the original transformer paper came out in 2017. I'm a bit surprised that this paper doesn't cite xLSTM or Block-Recurrent Transformers.
Also, this paper comes up very short on experiments. There is basically only Table 2. There is no study of length extrapolation (which is very relevant to the topic), no needle-in-a-haystack experiments, no scaling studies, no larger-scale experiments, etc. Even in this main Table 2 I see a couple of typos, and looking at the results there, the improvements seem to be quite minor.
So I would conclude, this needs a lot more work.
Yes, but those all rely on proprietary company secrets, while this is an open research paper. Besides, only Gemini so far has a context window of more than a million tokens.
I skimmed the paper, and unlike transformers they can basically scale much more efficiently to longer contexts. While it's possible to fit 1M tokens, you need a significant amount of memory. They only benchmark against GPT-2, though, so I would say it's quite preliminary work so far, although a promising architecture.
This is incorrect in the case of batched inference. There are two bottlenecks at play, compute and memory bandwidth, and your reasoning applies to compute. For memory it gets trickier: for the MLP layers you read the same set of weights for all elements of your batch, while for attention each batch element has its own KV cache to read. That's why in practice the length where attention starts to dominate is closer to model dimension / batch size, rather than just the model dimension. And that number isn't as high anymore.
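A quick illustration of that point (my own sketch, counting bytes read per layer per decode step, assuming fp16 weights and KV cache, standard multi-head attention without MQA/GQA, and an FFN hidden size of 4x the model dimension):

```python
def bytes_read_per_layer(d_model, seq_len, batch, bytes_per_elem=2):
    """Rough bytes read from memory per decode step, per layer.
    Weight reads are shared across the batch; KV-cache reads are not."""
    weight_params = 4 * d_model**2 + 8 * d_model**2             # attention projections + FFN
    weight_bytes = weight_params * bytes_per_elem               # read once per step, any batch size
    kv_bytes = batch * seq_len * 2 * d_model * bytes_per_elem   # K and V, per batch element
    return weight_bytes, kv_bytes

d, batch = 4096, 32
for n in (2_048, 8_192, 32_768):
    w, kv = bytes_read_per_layer(d, n, batch)
    print(f"n={n:>6}: weights={w/1e6:.0f} MB  kv_cache={kv/1e6:.0f} MB")
# KV-cache reads overtake weight reads once batch * n > 6 * d_model,
# i.e. at roughly n > 6 * d_model / batch -- much earlier than in the unbatched case.
```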
Totally nonsensical. DeepSeek's architecture is well documented; multiple implementations are available online.
On the general topic of non-attention LLMs, I recommend checking out MesaNet [1], Rodimus [2], Gated DeltaNet [3], or Mamba2 [4]. They are currently SOTA.
However, I have yet to see a compelling non-attention-based model that achieves good performance on code, math, reasoning, or multi-turn QA tasks. I don't think we are getting rid of attention soon; I believe the ability to look back is crucial for certain tasks.
[1] https://arxiv.org/abs/2506.05233
[2] https://arxiv.org/abs/2410.06577
[3] https://arxiv.org/abs/2412.06464
[4] https://arxiv.org/abs/2405.21060
zoklet-enjoyer•7mo ago
PaulHoule•7mo ago
Conventionally they use an attention mechanism that compares every token to every other token, which has a cost of N*N, or N squared, i.e. quadratic. If you want LLMs to chew over a huge amount of context (all the source code for your project), that's a problem, so people are looking for ways around it.
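In code, the quadratic part is the score matrix: every one of the n queries gets dotted against every one of the n keys (a toy single-head sketch, not any particular model's implementation, and without the causal mask):

```python
import numpy as np

def naive_attention(q, k, v):
    # q, k, v: (n, d) -- one row per token
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)                     # (n, n) matrix: this is the O(n^2) part
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ v                                # (n, d): each token mixes in every other token

n, d = 1024, 64
x = np.random.randn(n, d)
out = naive_attention(x, x, x)   # doubling n quadruples the size of `scores`
```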
zoklet-enjoyer•7mo ago
rybosome•7mo ago
This work builds a model that has the ability to “remember” parts of its previous input when generating and processing new input, and has part of its intelligence devoted to determining what is relevant to remember.
This is in lieu of the model kind of saying "I need to keep re-reading everything I've already read and said in order to keep going".
I’d welcome better explanations. :)
Icko_•7mo ago
yorwba•7mo ago
IIRC there are some FFT-based attention alternatives where encoding has complexity O(n log n), but there's no feasible way to cache anything and after appending a single token it costs O(n log n) again, so if you generate n tokens in sequence, the cost is actually O(n² log n).
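For example (my own toy sketch along the lines of FNet-style Fourier token mixing, not any specific paper's code): each decode step has to redo the transform over the whole prefix, so the per-step costs add up to roughly sum_t t*log(t), i.e. O(n^2 log n) overall.

```python
import numpy as np

def fft_mix(x):
    # FNet-style mixing: FFT over the sequence axis, keep the real part.
    # O(t log t) for a length-t prefix -- but it must be recomputed from
    # scratch whenever a token is appended; nothing like a KV cache survives.
    return np.fft.fft(x, axis=0).real

def decode(n_tokens, d_model, next_token):
    seq = np.random.randn(1, d_model)          # start from one seed token
    for _ in range(n_tokens - 1):
        mixed = fft_mix(seq)                   # full re-mix of the prefix at every step
        seq = np.vstack([seq, next_token(mixed)])
    return seq

# `next_token` is a hypothetical stand-in for the rest of the model:
out = decode(256, 32, next_token=lambda mixed: mixed[-1:])
```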