
"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•10mo ago

Comments

kzawpl•10mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•10mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" when working with models -- there's a marked dropoff in accuracy and intelligence once we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI evals are significantly better and quite interesting: OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
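A minimal sketch of the Graphwalks idea in Python, assuming the details not stated in the announcement (node count, edge count, hash width, prompt layout are all made up here; OpenAI's actual generator is not public):

```python
import random
from collections import deque

def make_graphwalks_prompt(n_nodes=64, n_edges=128, depth=2, seed=0):
    """Build a Graphwalks-style task: a directed graph of hex hashes as an
    edge list, plus the ground-truth set of nodes exactly `depth` BFS steps
    from a randomly chosen root. All sizes here are illustrative guesses."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    root = rng.choice(nodes)
    # BFS from the root, recording the depth at which each node is first seen.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        cur = queue.popleft()
        for nxt in adj.get(cur, []):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)

    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    answer = sorted(n for n, d in dist.items() if d == depth)
    return prompt, root, answer
```

The point is that the answer is cheap to compute programmatically but forces the model to attend across the whole edge list, since the edges reachable at each BFS step are scattered arbitrarily through the context.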

MRCR asks for direct quotes from semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The model is then asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/

PDF Tools

https://www.pdffixnow.com
1•instahotstar•1m ago•0 comments

Comfy.org

https://blog.comfy.org/
1•VanessaMGSA•2m ago•0 comments

Show HN: My OpenClaw knows what it did a week ago. Thanks to "hmem"-MCP

1•Bumblebiber•5m ago•0 comments

Africa Imported Europe's Worst Idea

https://magatte.substack.com/p/how-africa-imported-europes-worst
1•EvgeniyZh•5m ago•0 comments

Anthropic's Feud with Pentagon Earns It Fans Amid the Blowback

https://www.wsj.com/tech/ai/anthropics-feud-with-pentagon-earns-it-fans-amid-the-blowback-f7e2bb83
1•JumpCrisscross•7m ago•0 comments

KlongPy: Automatic Differentiation

http://www.klongpy.org/torch_backend/
1•tosh•8m ago•0 comments

Sam Altman: We have been working with the Dow to make our principles clear

https://twitter.com/i/status/2028640354912923739
2•matthieu_bl•8m ago•0 comments

How well do you know Claude Code?

https://claude-code.vercel.app/test
2•Krishnaa_•10m ago•0 comments

When "More" Makes the System Worse

https://kb-it.net/when_more_makes_the_system_worse/
1•better-it•11m ago•0 comments

Merrilin – We built an app to read books

https://tech.stonecharioteer.com/posts/2026/merrilin/
1•two_poles_here•12m ago•0 comments

Sandboxing Like a Pro in the Age of GasTown

https://github.com/avkcode/firecracker-sandbox
1•KyleVlaros•12m ago•0 comments

How to Recover Stolen Cryptocurrency and USDT

https://www.autopsymainnetsolutions.com
1•SAMUELluck•14m ago•0 comments

Another round of reporting on feed readers

https://rachelbythebay.com/w/2026/02/23/readers/
1•theshrike79•15m ago•0 comments

The Worst Language Won

https://theoryvc.com/blog-posts/the-worst-language-won
1•taubek•17m ago•0 comments

Arm's Cortex X925: Reaching Desktop Performance

https://chipsandcheese.com/p/arms-cortex-x925-reaching-desktop
4•ingve•24m ago•0 comments

Odd Lots, some guests are more perfect than others

https://networked.substack.com/p/on-odd-lots-some-guests-are-more
1•jaypinho•26m ago•1 comments

glFTPD

https://glftpd.io/
1•metadat•28m ago•0 comments

The Hacker Times

https://the-hacker-times.examples.workers.dev
1•fayazara•29m ago•1 comments

Fundamentals for Using Hyperspectral and Thermal Earth Observation Data (Day 1) [video]

https://www.youtube.com/watch?v=O6uSkvT8Zr0
1•marklit•31m ago•0 comments

HyperCard Changed Everything [video]

https://www.youtube.com/watch?v=hxHkNToXga8
1•adfm•32m ago•0 comments

Latest ToS update includes class action waiver and forced arbitration

https://github.com/zed-industries/zed/issues/50568
2•database64128•35m ago•0 comments

Myrient will shut down on 31 March 2026. Download any content you find important

https://myrient.erista.me
1•chaifeng•38m ago•0 comments

Neural-Temporal Compression – A State-Persistence Framework

https://github.com/andresuarus10-byte/memory-engine
1•KaelyrAT13•41m ago•2 comments

Show HN: A Calculator for Garden Horizons

https://gardenhorizons.app/
1•hugh1st•41m ago•0 comments

Doing a Video Call over a Database

https://www.youtube.com/watch?v=zwIc9fFcYVw
1•Jacques2Marais•44m ago•0 comments

Superagers' Secret Ingredient May Be the Growth of New Brain Cells

https://www.sciencealert.com/superagers-secret-ingredient-may-be-the-growth-of-new-brain-cells
1•jnord•45m ago•0 comments

Fooling Go's X.509 Certificate Verification

https://danielmangum.com/posts/fooling-go-x509-certificate-verification/
1•hasheddan•47m ago•0 comments

'To be free, we have to be feared,' Macron says in keynote nuclear speech

https://www.france24.com/en/france/20260302-macron-unveils-france-nuclear-strategy-eu-counter-rus...
2•vrganj•47m ago•0 comments

I built a pint-sized Macintosh

https://www.jeffgeerling.com/blog/2026/pint-sized-macintosh-pico-micro-mac/
7•ingve•53m ago•0 comments

Ask HN: How to get traction for Open-Source Projects

1•human_hack3r•54m ago•0 comments