
"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•11mo ago
Meh. NoLima is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
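The Graphwalks description above can be sketched in a few lines. This is an illustrative reconstruction only, not OpenAI's actual generator: node counts, edge counts, and label widths are guesses, and the prompt format is made up. The point is just that the task and its ground truth are trivially generated while the model must attend across the whole edge list.

```python
import random


def make_graphwalks_prompt(n_nodes=32, n_edges=64, depth=2, seed=0):
    """Generate a Graphwalks-style task: a random directed graph over
    hex-hash node labels, plus the ground-truth BFS frontier at `depth`.
    (Illustrative sketch, not OpenAI's actual generator.)"""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    # Adjacency list for the directed graph.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)
    # BFS: collect the nodes first reached at exactly `depth` hops.
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, []) if d not in seen}
        seen |= frontier

    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    question = f"BFS from {start}: list all nodes at depth {depth}."
    return prompt, question, frontier
```

Because the answer set is computed alongside the prompt, grading is exact-match against a set, which is what makes the benchmark cheap to scale to millions of tokens.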

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
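A minimal sketch of how an MRCR-style haystack could be built, under the same caveat: the item texts, topics, and query wording here are placeholders, not OpenAI's data. The key property is that "the k-th poem about X" is defined by document order, so the ground-truth answer falls out of the generation step.

```python
import random


def make_mrcr_corpus(topics=("tapirs", "bears", "ballerinas"),
                     forms=("poem", "story"), per_combo=50, seed=0):
    """Build an MRCR-style haystack: many items per (form, topic) pair,
    shuffled into one document, plus a 'k-th poem about X' query and its
    ground-truth answer. (Illustrative sketch only.)"""
    rng = random.Random(seed)
    items = []
    for form in forms:
        for topic in topics:
            for i in range(per_combo):
                items.append((form, topic, f"[{form} #{i + 1} about {topic}]"))
    rng.shuffle(items)

    # Target the 3rd poem about the first topic *in shuffled document
    # order*, since that is what the ordinal in the query refers to.
    form, topic, k = "poem", topics[0], 3
    matches = [text for f, t, text in items if f == form and t == topic]
    query = f"Give me poem number {k} about {topic}."
    answer = matches[k - 1]
    corpus = "\n\n".join(text for _, _, text in items)
    return corpus, query, answer
```

Grading is again exact-match: the model must quote `answer` verbatim, which forces it to track both the form/topic distinction and the ordinal position across the whole context.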

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/

Xata OSS: Postgres platform with branching, now Apache 2.0

https://xata.io/blog/open-source-postgres-branching-copy-on-write
1•mebcitto•1m ago•0 comments

Letting Bots Learn to Move Like Players

https://remvst.substack.com/p/letting-bots-learn-to-move-like-players
1•riidom•4m ago•0 comments

macOS Notifications for Claude Code and AeroSpace

https://kulikalov.com/claude-code-aerospace-notifications/
1•kulikalov•7m ago•0 comments

Up There with Carnegie

https://superconnectorbook.com/
1•Chrisszz•7m ago•0 comments

Show HN: Deadline.email – a daily reminder that you'll die

https://deadline.email
1•onesandofgrain•8m ago•0 comments

The Government Blacklisted the Best AI. It Came Back with the Same Red Lines

https://liminaldr.substack.com/p/the-government-tried-to-blacklist
1•BlendedPanda•10m ago•1 comments

Ask HN: What's Your Daily Routine?

1•chistev•11m ago•0 comments

Show HN: Anya – Offline static malware analysis (Rust)

https://github.com/elementmerc/anya
1•ElementMerc•16m ago•0 comments

Anchormd – Generate AI coding agent context files from any GitHub repo

https://anchormd.dev
1•aretedriver•18m ago•0 comments

The LMAX Architecture

https://martinfowler.com/articles/lmax.html
1•tosh•25m ago•1 comments

Why insects aren't huge: a new challenge to a decades-old idea

https://www.nature.com/articles/d41586-026-00976-0
2•marojejian•26m ago•1 comments

Hardware Is Hard?

https://prdpx7.github.io/posts/hardware-is-hard/
2•prdpx7•26m ago•1 comments

Show HN: JSON-logic-path – JSON logic with jsonpath multi-value resolution

https://github.com/bayinfosys/json-logic-path
1•anax32•27m ago•0 comments

Corporate Profits Are at Record Highs. These 4 Factors Could Sink Them

https://www.nytimes.com/2026/04/18/business/dealbook/corporate-profits-record.html
2•jhonovich•27m ago•0 comments

Why Mechanical Sympathy? (2011)

https://mechanical-sympathy.blogspot.com/2011/07/why-mechanical-sympathy.html
1•tosh•29m ago•0 comments

Only Law Can Prevent Extinction

https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction
2•namanyayg•29m ago•0 comments

How Long Can You Keep Peptides After Reconstitution?

https://lifeimprovementschemes.substack.com/p/how-long-can-you-keep-peptides-after
1•BenPace•29m ago•1 comments

The Fermi Paradox Is Nerdslop

https://monismos.substack.com/p/the-fermi-paradox-is-nerdslop
1•BenPace•29m ago•0 comments

I've Been Trying to Delay the Industrial Revolution (and I'm Failing)

https://lostfutures.substack.com/p/ive-been-trying-to-delay-the-industrial
1•BenPace•30m ago•0 comments

The intelligence illusion: why AI isn't as smart as it is made out to be

https://www.nature.com/articles/d41586-026-00882-5
1•gnabgib•30m ago•1 comments

Why Postgres wants NVMe on the hot path, and S3 everywhere else

https://thenewstack.io/postgres-nvme-s3-storage/
2•tanelpoder•30m ago•0 comments

Binary GCD

https://gmplib.org/manual/Binary-GCD
3•tosh•38m ago•0 comments

Young sons of legendary U.S. marshal ride horseback from Oklahoma to New York

https://texascooppower.com/the-astonishing-ride-of-the-abernathy-boys/
13•mhb•41m ago•2 comments

Thoughts and Feelings Around Claude Design

https://samhenri.gold/blog/20260418-claude-design/
2•cdrnsf•42m ago•0 comments

OpenAI Proposes a 'Social Contract' for the Intelligence Age

https://www.noemamag.com/openai-proposes-a-social-contract-for-the-intelligence-age/
1•Brajeshwar•42m ago•1 comments

Show HN: TTS.ai

https://tts.ai/
1•nadermx•43m ago•0 comments

My personal website – a start to my internet home

https://alexarias.me/
1•AlexArias•43m ago•0 comments

Vibe Genomics: Sequencing Your Whole Genome at Home

https://vibe-genomics.replit.app/
1•moozilla•43m ago•0 comments

Show HN: Trained a 12M transformer on an ML framework we built from scratch

https://github.com/mni-ml/framework
1•caliandbust•43m ago•0 comments

Trappsec – Deception as a Developer Tool

https://trappsec.dev
3•kyuradar•47m ago•1 comments