frontpage.

"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested only on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting: OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
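A minimal sketch of what Graphwalks-style data generation could look like -- the function names, graph sizes, and edge format here are my own assumptions, not OpenAI's actual implementation:

```python
import random

def make_graph(n_nodes=32, n_edges=96, seed=0):
    """Build a random directed graph whose node labels are 64-bit hex hashes."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
    edges = {u: set() for u in nodes}
    while sum(len(v) for v in edges.values()) < n_edges:
        u, v = rng.sample(nodes, 2)  # distinct endpoints, no self-loops
        edges[u].add(v)
    return nodes, edges

def bfs_at_depth(edges, start, depth):
    """Ground-truth answer: the nodes first reached exactly `depth` steps from `start`."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        nxt = set()
        for u in frontier:
            nxt |= edges[u] - seen
        seen |= nxt
        frontier = nxt
    return frontier

nodes, edges = make_graph()
start = nodes[0]
# The edge list below fills the prompt; the model must reproduce `gold`.
prompt_edges = "\n".join(f"{u} -> {v}" for u in edges for v in sorted(edges[u]))
gold = bfs_at_depth(edges, start, depth=2)
```

The appeal of this scheme is that both the prompt and the exact answer set come from the same generator, so grading is a set comparison and the context length scales freely with `n_nodes` and `n_edges`.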

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
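An MRCR-style needle set could be generated along these lines -- a hypothetical sketch with placeholder text; the real benchmark's templates and counts differ:

```python
import random

TOPICS = ["tapirs", "bears", "ballerinas"]
FORMS = ["poem", "story"]

def make_corpus(per_pair=5, seed=0):
    """Interleave many (form, topic) items; each carries a unique quotable line."""
    rng = random.Random(seed)
    items = []
    for form in FORMS:
        for topic in TOPICS:
            for i in range(per_pair):
                items.append((form, topic, f"A {form} about {topic}, number {i}."))
    rng.shuffle(items)  # scatter the needles across the context window
    return items

def answer(items, form, topic, ordinal):
    """Ground truth for e.g. 'give me the third poem about tapirs' (ordinal=3)."""
    matches = [text for f, t, text in items if f == form and t == topic]
    return matches[ordinal - 1]

corpus = make_corpus()
# The model sees the shuffled corpus and must return this exact quote:
gold = answer(corpus, "poem", "tapirs", 3)
```

Because every item is unique and the ordinal is defined by position in the shuffled corpus, grading reduces to an exact string match, while answering requires the model to track and count occurrences across the whole context.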

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/

New AI-generated videos of Iran war spread across social media

https://www.youtube.com/watch?v=0mKCBAM4wZs
1•mgh2•7m ago•0 comments

Traders place $760M bet on falling oil ahead of Hormuz announcement

https://www.reuters.com/sustainability/boards-policy-regulation/traders-place-760-million-bet-fal...
2•Jimmc414•12m ago•0 comments

Show HN: I made a calculator that works over disjoint sets of intervals

https://victorpoughon.github.io/interval-calculator/
2•fouronnes3•20m ago•1 comment

Casus Belli Engineering

https://marcosmagueta.com/blog/casus-belli-engineering/
3•b-man•22m ago•0 comments

The Bureaucrats Won't Be Toppled: Revolts No Longer Work

https://unherd.com/2025/09/why-the-bureaucrats-wont-be-toppled/
3•barry-cotter•22m ago•1 comment

Infinite Velocity

https://cube-drone.com/posts/2026/infinite_velocity/
2•tapoxi•31m ago•0 comments

Madison Square Garden's Surveillance Machine

https://www.wired.com/story/madison-square-garden-jim-dolan-surveillance-machine/
2•c420•36m ago•4 comments

Why LLMs Aren't Giving You the Result You Expect

https://akitaonrails.com/en/2026/04/15/how-to-talk-to-claude-code-effectively/
2•vinipolicena•38m ago•0 comments

The parking lot that's keeping the lights on

https://www.begiant.ca/stories/places/solar-parking-lots-energy-emissions
2•Teever•44m ago•0 comments

Reflecting on my own strange year at Uber

https://anon-ex-uber.medium.com/reflecting-on-my-own-strange-year-at-uber-e73165422245
8•anon-ex-uber•44m ago•0 comments

Education research is weak and sloppy. Why?

https://www.theargumentmag.com/p/education-research-is-weak-and-sloppy
2•barry-cotter•46m ago•0 comments

Show HN: Nilbox – Run OpenClaw without exposing your API tokens

https://nilbox.run/
2•rednakta•48m ago•0 comments

The Centaur Era

https://secondthoughts.ai/p/the-centaur-era
1•gk1•50m ago•1 comment

No one can force me to have a secure website [pdf]

https://tom7.org/httpv/httpv.pdf
1•djoldman•51m ago•0 comments

Show HN: TokensAI – Mint for AI Usage

https://tokensai.dev
1•SowjanyaY•52m ago•0 comments

GCC Compiler Adds Arm AGI CPU Target

https://www.phoronix.com/news/GCC-Arm-AGI-CPU
1•Bender•53m ago•0 comments

Linux 7.1 Crypto Code Rework Enables More Optimizations by Default

https://www.phoronix.com/news/Linux-7.1-Crypto
1•Bender•53m ago•0 comments

Wine 11.7 Brings VBScript Fixes, DirectSound 7.1 Channel Support

https://www.phoronix.com/news/Wine-11.7-Released
1•Bender•54m ago•0 comments

Show HN: Irl.rent – What renters pay, not what's listed in SF

7•rajatady•55m ago•1 comment

Show HN: PodWarden – A catalog of 9k+ self-hosted apps with one-click deploy

https://www.podwarden.com/
2•rayneclarke•57m ago•0 comments

Show HN: Llama.cpp Tutorial 2026: Run GGUF Models Locally on CPU and GPU

3•anju-kushwaha•58m ago•0 comments

Conway's Game of Life with Penrose and Voronoi tilings

https://life.au.pe/
1•satchlj•59m ago•0 comments

PyCon US 2026

https://us.pycon.org/2026/
2•vismit2000•1h ago•0 comments

Steno – Compressed memory with RAG for AI agents

https://github.com/KultMember6Banger/steno
1•KM6B•1h ago•1 comment

Zoom teams up with World to verify humans in meetings

https://techcrunch.com/2026/04/17/zoom-teams-up-with-world-to-verify-humans-in-meeting/
1•rfarley04•1h ago•3 comments

Ludum Dare will officially end in October 2028

https://www.gamedeveloper.com/production/ludum-dare-will-officially-end-in-october-2028
3•matthew_hre•1h ago•0 comments

Researchers Stole $10k from MKBHD's Locked iPhone

https://www.macrumors.com/2026/04/15/apple-pay-visa-transit-exploit/
4•zacharyozer•1h ago•0 comments

Simple Machines

https://en.wikipedia.org/wiki/Simple_machine
3•ogogmad•1h ago•0 comments

Generating Hierarchical JSON Representations of Scientific Sentences Using LLMs

https://arxiv.org/abs/2603.23532
2•PaulHoule•1h ago•0 comments

How Australia Stopped the Boats

https://worksinprogress.co/issue/how-australia-really-stopped-the-boats/
2•barry-cotter•1h ago•1 comment