
"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•1y ago

Comments

kzawpl•1y ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
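The distinction can be made concrete with a small sketch. The example below plants one fact in a haystack of filler text and builds two questions: one that repeats the needle's own words (exact text search), and one that requires a single inference hop to connect. The fact, the filler, and the function name are all illustrative, not NoLiMa's actual benchmark items:

```python
import random

def build_needle_task(n_fillers=200, latent=False, seed=0):
    """Plant one fact in a haystack of filler sentences.

    latent=False: the question reuses the needle's wording, so lexical
    matching suffices. latent=True: answering requires knowing the Semper
    Opera House is in Dresden, i.e. one reasoning hop beyond the text.
    """
    rng = random.Random(seed)
    fillers = [f"Filler sentence number {i} says nothing of interest."
               for i in range(n_fillers)]
    needle = "Yuki lives next to the Semper Opera House."
    if latent:
        question = "Which character has been to Dresden?"
    else:
        question = "Who lives next to the Semper Opera House?"
    # insert the needle at a random position in the haystack
    pos = rng.randrange(len(fillers))
    haystack = fillers[:pos] + [needle] + fillers[pos:]
    return "\n".join(haystack) + "\n\n" + question
```

Models that score well on the first variant can still degrade sharply on the second as the haystack grows, which is the gap the benchmark is built to measure.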
vessenes•1y ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.

MRCR asks for direct quotes at semantically identified locations in the text, e.g. poems about tapirs, bears and ballerinas, as well as stories about tapirs, bears and ballerinas are generated, perhaps fifty each. The system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
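The MRCR setup can be sketched the same way: generate many near-identical items, shuffle them into one context, and ask for the k-th occurrence of a given form/topic pair by exact quote. Again, the names and item format below are my own illustration, not OpenAI's data:

```python
import random

ORDINALS = ["first", "second", "third", "fourth", "fifth"]
FORMS = ["poem", "story"]
TOPICS = ["tapirs", "bears", "ballerinas"]

def make_mrcr_task(per_pair=5, seed=0):
    """Build an MRCR-style task: short items for every (form, topic) pair,
    shuffled into one context, plus a query like 'give me the third poem
    about tapirs' and the exact ground-truth quote."""
    rng = random.Random(seed)
    items = []
    for form in FORMS:
        for topic in TOPICS:
            for _ in range(per_pair):
                # a unique tag makes the ground-truth quote unambiguous
                items.append((form, topic,
                              f"A {form} about {topic} [item {len(items):02d}]."))
    rng.shuffle(items)
    form = rng.choice(FORMS)
    topic = rng.choice(TOPICS)
    k = rng.randrange(per_pair)
    query = f"Give me the {ORDINALS[k]} {form} about {topic}, quoted exactly."
    # the answer is the (k+1)-th matching item in document order
    matches = [body for f, t, body in items if f == form and t == topic]
    answer = matches[k]
    context = "\n\n".join(body for _, _, body in items)
    return context, query, answer
```

The near-duplicates are what make this hard: the model cannot just locate a distinctive string, it has to track document order among many semantically identical distractors and keep poems separate from stories.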

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/