
"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•1y ago

Comments

kzawpl•1y ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•1y ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The system is then asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing stories from poems.
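The MRCR idea can be sketched the same way: generate many (genre, topic) items, shuffle them into one long context, and take as the gold answer the exact text of, say, the third poem about tapirs in document order. Again, the item texts and counts below are placeholders, not OpenAI's real generator.

```python
import random

GENRES = ["poem", "story"]
TOPICS = ["tapirs", "bears", "ballerinas"]

def make_mrcr_example(per_pair=5, seed=0):
    """Build an MRCR-style task: shuffled (genre, topic) items form the
    context; the gold answer is the exact text of the third poem about
    tapirs, counted in order of appearance."""
    rng = random.Random(seed)
    items = [f"A {genre} about {topic} (#{i})."
             for genre in GENRES for topic in TOPICS
             for i in range(per_pair)]
    rng.shuffle(items)
    context = "\n\n".join(items)

    # Gold: the 3rd "poem about tapirs" in document order.
    matches = [t for t in items if t.startswith("A poem about tapirs")]
    question = "Give me the third poem about tapirs, quoted exactly."
    return context, question, matches[2]
```

Grading is an exact string match, so the benchmark forces the model to both locate and count semantically similar items rather than pattern-match a lone needle.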

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/

Cisco open sources toolkit for tracing AI model lineage

https://blogs.cisco.com/ai/model-provenance-kit
1•hsanthan•3m ago•0 comments

Swival: A coding agent for any model

https://swival.dev/
1•handfuloflight•3m ago•0 comments

Quantifying Voter Biases in Online Platforms: An Instrumental Variable Approach

https://arxiv.org/abs/1910.00757
1•smooke•4m ago•0 comments

Steep fertilizer and fuel prices could squeeze US farmers for months to come

https://www.wpr.org/news/steep-fertilizer-fuel-prices-squeeze-us-farmers-months-come
1•_tk_•5m ago•0 comments

Show HN: Vanilla-scroll-sky: CSS-only modern scroll-driven storytelling sections

https://github.com/ulrischa/vanilla-scroll-sky
1•ulrischa•5m ago•0 comments

Migrating from Supabase

https://blog.val.town/blog/migrating-from-supabase/
2•gurjeet•8m ago•0 comments

Do we even need a better GitHub?

https://www.aviator.co/blog/do-we-even-need-a-better-github/
2•tonkkatonka•8m ago•0 comments

Stable Specialization in Rust

https://goldstein.lol/posts/stable-specialization/
1•PaulHoule•9m ago•0 comments

Claude will use all SpaceX Colossus datacenter capacity

https://twitter.com/NVIDIAAI/status/2052082412994383936
3•kristianpaul•11m ago•1 comments

When do we know someone has died

https://blog.computationalcomplexity.org/2026/05/when-do-we-know-someone-has-died.html
1•speckx•12m ago•0 comments

Multipath Reliable Connection spec published

https://www.opencompute.org/documents/ocp-mrc-1-0-pdf
1•jabl•12m ago•0 comments

Olaf: Bringing an Animated Character to Life in the Physical World

https://arxiv.org/abs/2512.16705
1•programd•12m ago•0 comments

Bell Laboratories Record (August 1941) [pdf]

https://www.worldradiohistory.com/Archive-Bell-Laboratories-Record/40s/Bell-Laboratories-Record-1...
2•zuhayeer•13m ago•0 comments

MIT’s virtual violin offers luthiers a new design tool

https://arstechnica.com/science/2026/05/mits-virtual-violin-offers-luthiers-a-new-design-tool/
2•smushy•13m ago•0 comments

Free and Simple Chess Analysis

https://www.g6chess.com/
1•mantegna•14m ago•1 comments

Supercomputer networking to accelerate large scale AI training

https://openai.com/index/mrc-supercomputer-networking/
1•dataking•14m ago•0 comments

xAI will be dissolved as a separate company

https://twitter.com/elonmusk/status/2052105373621121284
1•break_the_bank•15m ago•0 comments

Learning Advanced JavaScript (2008)

https://johnresig.com/apps/learn/
1•downbad_•17m ago•1 comments

Gypsy Woman Hardware Live Jam (2023) [video]

https://www.youtube.com/watch?v=_SSXALxZ3Hs
1•elvis70•18m ago•0 comments

Mainframe modernization is no longer optional for the AI-driven enterprise

https://thenewstack.io/open-mainframe-enterprise-modernization/
2•rbanffy•20m ago•0 comments

Let's Get EFF to Accept Monero Donations

https://monerocoalition.org/lets-get-eff-to-accept-monero-donations/
4•Cider9986•21m ago•0 comments

You can make more money buying MTG cards than the lottery

https://meadow.cafe/blog/0073-you-can-make-more-money-buying-mtg-cards-than-the-lottery/
2•speckx•24m ago•0 comments

Go-joker – a much faster Clojure interpreter written in Go and WASM

https://rcarmo.github.io/projects/go-joker/
6•rcarmo•24m ago•0 comments

ZAYA1-8B: Frontier intelligence density, trained on AMD

https://www.zyphra.com/post/zaya1-8b
3•mseri•26m ago•0 comments

Shadow – find which prompt change broke your AI agent

https://github.com/manav8498/Shadow
2•manav8498•26m ago•0 comments

Upcoming El Niño: The World Is About to Get a Preview of Life in 2035

https://www.nytimes.com/2026/05/06/opinion/el-nino-climate.html
5•puttycat•28m ago•0 comments

Planting Trees and Dreaming of Software

https://jerodsanto.net/2026/05/planting-trees-software-dreams/
1•herbertl•29m ago•0 comments

Hackers Hate AI Slop More Than You Do

https://www.wired.com/story/cybercriminals-are-complaining-about-ai-slop-flooding-their-forums/
9•aledevv•31m ago•1 comments

An aggregate of payment usage data released by businesses that accept Monero

https://monerostats.org/
1•Cider9986•31m ago•0 comments

A Fundamental FX Factor Model

https://dm13450.github.io/2026/04/19/A-Fundamental-FX-Factor-Model.md.html
1•dm13450•31m ago•0 comments