This is pretty remarkable. We've spent a lot of time finding workarounds for LLMs reading long docs. Now that's gone.
wilddolphin•1h ago
Optimizing AI in general. How cool is that?
remaximize•11m ago
the next wave of AI, very cool
williamimoh•50m ago
Looks like long context isn’t a problem anymore
tamarru•43m ago
Neither are cost and latency, in the long term. LLMs ultimately become more economically viable than they are now, broadening the scope of every existing LLM-driven application (particularly STS, conversational AI, etc.)
pstorm•27m ago
I’m very surprised this isn’t getting more attention. Am I missing something?
It seems at or above SOTA on the given benchmarks, doesn’t have context rot, is orders of magnitude faster, and uses less compute than current transformer models. I suppose it’s just an announcement and we can’t test it ourselves yet.
jakevoytko•18m ago
The proof is in the pudding. At this point, there have been plenty of models that overperformed on benchmarks and underperformed on real work. So my stance is that I'm curious, I'm excited to see where it goes, and I don't believe it until I can try it.
remaximize•12m ago
I agree. If it holds up, it's a real architectural breakthrough.