frontpage.

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•8mo ago

Comments

kzawpl•8mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•8mo ago
Meh. NoLima is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
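The Graphwalks construction described above is easy to picture in code. Here is a minimal toy sketch (my own, not OpenAI's actual generator -- the hash length, graph size, and function names are invented for illustration): generate a directed graph over random hexadecimal labels, then compute the set of nodes first reached at a given BFS depth, which serves as the gold answer the model must reproduce.

```python
import random

def make_hash(rng):
    """Random 8-character hexadecimal node label."""
    return f"{rng.getrandbits(32):08x}"

def make_graph(n_nodes, n_edges, seed=0):
    """Directed graph as an adjacency dict keyed by hex-hash labels."""
    rng = random.Random(seed)
    nodes = [make_hash(rng) for _ in range(n_nodes)]
    graph = {node: [] for node in nodes}
    for _ in range(n_edges):
        src, dst = rng.choice(nodes), rng.choice(nodes)
        graph[src].append(dst)
    return graph

def bfs_frontier(graph, start, depth):
    """Nodes first reached at exactly `depth` BFS steps from `start`
    (each node counted at its first visit, as in a standard BFS)."""
    visited = {start}
    frontier = {start}
    for _ in range(depth):
        nxt = set()
        for node in frontier:
            for neigh in graph[node]:
                if neigh not in visited:
                    visited.add(neigh)
                    nxt.add(neigh)
        frontier = nxt
    return frontier
```

The point of the design is that the gold answer is cheap to verify programmatically, while producing it requires the model to attend across the whole context, not just locate one needle.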

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each. The system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
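The MRCR setup can be mocked up in the same spirit. This is a toy sketch with invented names (the real eval uses LLM-generated poems and stories; here placeholder strings stand in for them): shuffle many (kind, topic) items into a context, then grade by exact match against the ordinal-th item of the requested kind and topic, in order of appearance.

```python
import random

def make_corpus(kinds, topics, per_combo, seed=0):
    """Shuffled list of (kind, topic, text) items, e.g. fifty poems and
    fifty stories per topic. Texts are stand-ins for generated ones."""
    rng = random.Random(seed)
    items = [(kind, topic, f"{kind} #{i} about {topic}")
             for kind in kinds for topic in topics
             for i in range(per_combo)]
    rng.shuffle(items)
    return items

def gold_answer(items, kind, topic, ordinal):
    """The correct quote: the `ordinal`-th (1-based) item of that kind
    and topic, counted in order of appearance in the shuffled context."""
    matches = [text for k, t, text in items if k == kind and t == topic]
    return matches[ordinal - 1]

def grade(model_output, items, kind, topic, ordinal):
    """MRCR-style grading: exact match against the gold quote."""
    return model_output.strip() == gold_answer(items, kind, topic, ordinal)
```

Because grading is exact-match against a programmatically known quote, the benchmark scales to arbitrary context lengths without any human labeling.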

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/

Show HN: Burnt out and failing, I built an AI that gives a shit

1•kaufy•17s ago•0 comments

Benchmark Comparison: JSONL vs. TOON output for JSON-render efficiency

https://github.com/vercel-labs/json-render/issues/33
1•lafalce•51s ago•0 comments

Pull requests with LLM attribution are predatory behavior

https://127001.me/post/llm-attribution-predatory/
1•koiueo•53s ago•0 comments

Show HN: I built an AI book recommender in 2 days

https://mynextbook.ai
1•PouyaRZ•59s ago•0 comments

Calico Basin Scrambling

https://xorvoid.com/2026_01_calico_basin_scrambling.html
1•ibobev•1m ago•0 comments

Time in C++: C++20 Brought Us Time Zones

https://www.sandordargo.com/blog/2026/01/21/clocks-part-8-cpp20-timezones
1•ibobev•1m ago•0 comments

FoundationDB's versionstamps should be everywhere

https://fragno.dev/blog/versionstamps
1•WilcoKruijer•3m ago•0 comments

Show HN: YOLO-cage – AI coding agents that can't exfiltrate secrets

https://github.com/borenstein/yolo-cage
1•borenstein•4m ago•0 comments

Everything Gen Z needs to know about the 2025 tech landscape

https://stackoverflow.blog/2026/01/14/gen-z-wrapped-2025/
1•BerislavLopac•5m ago•0 comments

Show HN: I made a roguelike game playable over SSH

https://dev-dungeon.com
1•viiralvx•5m ago•0 comments

Scott Bessent calls Denmark "irrelevant", is not concerned by Treasury sell-off

https://www.cnbc.com/2026/01/21/bessent-davos-denmark-greenland-treasuries.html
1•maxloh•5m ago•1 comments

100x a Business with AI

https://twitter.com/vasuman/status/2010473638110363839
1•gmays•6m ago•0 comments

libcurl memory use some years later

https://daniel.haxx.se/blog/2026/01/21/libcurl-memory-use-some-years-later/
3•TangerineDream•8m ago•0 comments

The Oligarchs Pushing for Conquest in Greenland

https://newrepublic.com/article/205102/oligarchs-pushing-conquest-greenland-trump
2•afavour•9m ago•0 comments

The Confabulations of Oliver Sacks

https://nautil.us/the-confabulations-of-oliver-sacks-1262447/
2•bookofjoe•9m ago•1 comments

Cognitive Collapse: A First Reconnaissance

https://www.ecosophia.net/cognitive-collapse-a-first-reconnaissance/
1•bediger4000•10m ago•0 comments

Alex Honnold did a trial climb up 101 today. Thoughts?

https://old.reddit.com/r/Taipei/comments/1qhxtk7/alex_honnold_did_a_trial_climb_up_101_today/
2•keepamovin•10m ago•0 comments

Show HN: AI 3D Camera: Transform Any Photo into a Professional Photography Studio

https://ai3dcamera.com/
1•dond1986•11m ago•0 comments

Show HN: See the carbon impact of your cloud as you code

2•hkh•13m ago•0 comments

Agentic AI and the Mythical Agent-Month

http://muratbuffalo.blogspot.com/2026/01/agentic-ai-and-mythical-agent-month.html
1•vinhnx•16m ago•0 comments

Tree CLI's plain text secrets

https://w.willx86.com/2026/01/21/tree-secrets.html
1•willx86•17m ago•0 comments

Memory supply shortfall will cause chip shortage to spread to other segments

https://www.tomshardware.com/pc-components/ram/data-centers-will-consume-70-percent-of-memory-chi...
2•walterbell•17m ago•0 comments

A Lifetime of Service

https://olly.world/a-lifetime-of-service
2•lylo•17m ago•1 comments

Show HN: An open source "Cursor for Google Sheets" with conversation memory

https://github.com/Ai-Quill/ai-sheeter
1•tuantruong•17m ago•0 comments

GongU

https://gongu.xyz
1•dwk601•18m ago•0 comments

So, why *should* GNOME support server side decorations?

https://blister.zip/posts/gnome-ssd/
1•todsacerdoti•18m ago•1 comments

YC Spring – Full-Stack AI Consulting Company

1•latmba06•18m ago•0 comments

Computational model discovers new types of neurons hidden in decade-old dataset

https://bigthink.com/neuropsych/computational-model-discovers-new-types-of-neurons-hidden-in-deca...
1•Brajeshwar•20m ago•0 comments

Webb reveals a planetary nebula with clarity, and it is spectacular

https://arstechnica.com/space/2026/01/webb-has-given-us-with-a-stunning-new-view-of-a-well-known-...
2•Brajeshwar•20m ago•0 comments

From Veritasium: What If You Keep Slowing Down?

https://www.media.mit.edu/articles/veritasium-what-if-you-keep-slowing-down/
1•Brajeshwar•20m ago•0 comments