frontpage.

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•10mo ago

Comments

kzawpl•10mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•10mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
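A minimal sketch of that setup, assuming details not in the announcement (node count, hash length, and edge density are illustrative): generate a directed graph of hex-named nodes, run BFS yourself to get the ground-truth answer set, and serialize the edge list as the long-context prompt.

```python
import random
import hashlib
from collections import deque

def make_graph(n_nodes=64, n_edges=256, seed=0):
    """Random directed graph whose nodes are short hex hashes,
    in the spirit of the Graphwalks description above."""
    rng = random.Random(seed)
    nodes = [hashlib.sha1(str(i).encode()).hexdigest()[:8] for i in range(n_nodes)]
    edges = {n: set() for n in nodes}
    for _ in range(n_edges):
        src, dst = rng.choice(nodes), rng.choice(nodes)
        if src != dst:
            edges[src].add(dst)
    return nodes, edges

def bfs_nodes_at_depth(edges, start, depth):
    """Ground truth: all nodes first reached at exactly `depth` hops."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] >= depth:
            continue
        for nxt in edges[node]:
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return sorted(n for n, d in seen.items() if d == depth)

nodes, edges = make_graph()
start = nodes[0]
answer = bfs_nodes_at_depth(edges, start, 2)
# The prompt would list every edge ("abc123 -> def456") to fill the context,
# then ask the model for this set; scoring compares its reply against `answer`.
```

The appeal is that the answer is cheap to compute exactly but cannot be found by matching any single string in the context, so the model has to attend across the whole edge list.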

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each. The system is then asked "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing stories from poems.
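A toy version of that generator, with assumed (not OpenAI's actual) item texts and counts: build many near-identical items that differ only in kind, topic, and ordinal, shuffle them so position in the context does not equal ordinal, and compute the exact quote the model should return.

```python
import random

def make_mrcr_corpus(kinds=("poem", "story"),
                     topics=("tapirs", "bears", "ballerinas"),
                     per_pair=5, seed=0):
    """Toy MRCR-style corpus: items differ only in kind, topic, and ordinal,
    so answering requires counting matches, not string search."""
    rng = random.Random(seed)
    items = []
    for kind in kinds:
        for topic in topics:
            for i in range(per_pair):
                items.append((kind, topic, f"A {kind} about {topic}, number {i + 1}."))
    rng.shuffle(items)  # interleave so context position != ordinal
    return items

def ground_truth(items, kind, topic, ordinal):
    """Expected answer: the exact text of the Nth matching item, in order."""
    matches = [text for k, t, text in items if k == kind and t == topic]
    return matches[ordinal - 1]

corpus = make_mrcr_corpus()
# Query analogous to "give me the third poem about tapirs":
expected = ground_truth(corpus, "poem", "tapirs", 3)
```

Scoring is then a direct string comparison between the model's quote and `expected`, which is what makes the benchmark easy to evaluate programmatically.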

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/

An AI that plans multi-city trips in seconds. CRAZY product

https://explorinder.com/
1•pabloceg•16s ago•0 comments

Public Memories – Comedy Skits from Krazam [video]

https://www.youtube.com/watch?v=AS9y-d2BvZU
1•nvader•18s ago•0 comments

1M context is now generally available for Opus 4.6 and Sonnet 4.6

https://claude.com/blog/1m-context-ga
1•meetpateltech•1m ago•0 comments

Show HN: I Made a PS1 Static Recompiler with No Prior Experience (and Claude)

https://1379.tech/i-built-a-ps1-static-recompiler-with-no-prior-experience-and-claude-code/
1•Gamemaster1379•1m ago•0 comments

Yes, and

https://en.wikipedia.org/wiki/Yes,_and_...
1•lucidplot•1m ago•0 comments

Einstein's Riddle – Who owns the fish?

https://www.numericana.com/answer/recreational.htm#einstein5
1•v8xi•2m ago•0 comments

The Cost of Delegation

https://variantsystems.io/blog/cost-of-delegation
1•vipulbhj•2m ago•0 comments

Webhook Architecture – Design Pattern

https://beeceptor.com/docs/webhook-feature-design/
1•ankit84•3m ago•0 comments

Show HN: TheDayAfter – open-source addiction recovery tracker

https://thedayafter.app/
1•walky•3m ago•0 comments

Claude Opus 4.6 now ships with 1M context by default

https://twitter.com/alasano/status/2032505230386958546
2•alasano•3m ago•1 comments

Scanner Raises Series A Led by Sequoia Capital

https://scanner.dev/blog/scanner-raises-series-a-led-by-sequoia-capital
1•eatonphil•6m ago•0 comments

Vibe Crafting: spec and test-driven agent development

https://github.com/scosman/vibe-crafting
1•scosman•6m ago•0 comments

Who Uses AI in Congress?

https://nicholasdecker.substack.com/p/who-uses-ai-in-congress
1•speckx•7m ago•0 comments

50x Faster Post-Training

https://www.workshoplabs.ai/blog/post-training-50x-faster
2•addiefoote8•8m ago•0 comments

Show HN: RepoCrunch – CLI to analyze GitHub repos

https://github.com/kimwwk/repocrunch
1•chillkim•9m ago•1 comments

How Russia's new elite hit squad was compromised by Google Translate

https://theins.press/en/inv/290235
3•dralley•10m ago•0 comments

Notes from the trough of sorrow: why we killed our own product

1•timshell•11m ago•0 comments

Q&A: Why does gas set the price of electricity – and is there an alternative?

https://www.carbonbrief.org/qa-why-does-gas-set-the-price-of-electricity-and-is-there-an-alternat...
2•mariuz•11m ago•0 comments

Rescreen: Give agents control of your screen, securely

https://github.com/ygwyg/rescreen
1•handfuloflight•12m ago•0 comments

Depot Raised a $10M Series A

https://depot.dev/blog/depot-raises-series-a
2•eatonphil•13m ago•0 comments

How Predictable Are the Oscars?

https://futuresearch.ai/oscars/
6•nbosse•16m ago•2 comments

Revealed: Face of 75,000-year-old female Neanderthal from cave

https://www.cam.ac.uk/stories/shanidar-z-face-revealed
4•thunderbong•17m ago•0 comments

AI agent 'lobster fever' grips China despite risks

https://techxplore.com/news/2026-03-ai-agent-lobster-fever-china.html
2•Brajeshwar•17m ago•0 comments

LDP: Identity-Aware Routing for Multi-Agent LLMs – 37% Less Tokens

https://arxiv.org/abs/2603.08852
1•prakashsunil•19m ago•0 comments

When code is free, research is all that matters

https://twitter.com/amytam01/status/2031072399731675269
1•gmays•19m ago•0 comments

Scaling ClickHouse to petabytes of AI observability data

https://langfuse.com/blog/2026-03-10-simplify-langfuse-for-scale
3•marcklingen•20m ago•0 comments

Self-Driving Corporations (2020)

https://interconnected.org/home/2020/11/17/self_driving_corporations
1•alcazar•20m ago•0 comments

The Colorado River Does Not Reach 2030

https://drlennecefer.substack.com/p/the-colorado-river-does-not-reach
3•ThemalSpan•21m ago•0 comments

I built a GDPR analytics alternative to Google Analytics

https://eurometrics.eu
1•snesmachny•21m ago•0 comments

Lost in Backpropagation: The LM Head Is a Gradient Bottleneck

https://arxiv.org/abs/2603.10145
1•famouswaffles•21m ago•0 comments