frontpage.

Everyone in Seattle Hates AI

https://jonready.com/blog/posts/everyone-in-seattle-hates-ai.html
2•mips_avatar•2m ago•0 comments

The Future of EU eCall: Why Automotive Crash Reporting Is About to Get Smarter

https://www.smarteye.se/blog/euro-ncap-2026-ecall-crash-reporting/
1•walterbell•2m ago•0 comments

A sick tool for visualizing graphs built with DuckDB WASM and cosmos.gl

https://cosmograph.app
1•ernaem•3m ago•0 comments

Heavy Truck Sales Collapsed in October and November. Recession Indicator

https://www.calculatedriskblog.com/2025/12/heavy-truck-sales-collapsed-in-october.html
1•speckx•3m ago•0 comments

My favorite iPhone ever is now officially obsolete

https://www.neowin.net/news/my-favorite-iphone-ever-is-now-officially-obsolete/
2•walterbell•7m ago•0 comments

Basecamp to Phase Out Once Business Model

https://twitter.com/jasonfried/status/1996245550006497591
1•weaksauce•11m ago•0 comments

Show HN: Free French Health Directory – Find Doctors and Medical Facilities

https://www.mavillesante.fr/
1•toutoulliou•13m ago•1 comment

Elon Musk is on a racist posting spree again

https://www.theverge.com/news/837423/elon-musk-x-racist-posts-minnesota
8•latexr•13m ago•0 comments

Show HN: I analyzed and visualized 5k near-death and out of body experiences

https://www.noeticmap.com/
1•mikias•15m ago•0 comments

How Attention Got So Efficient [GQA/MLA/DSA] [video]

https://www.youtube.com/watch?v=Y-o545eYjXM
1•meffmadd•19m ago•0 comments

macOS default resource class updated to m4pro.medium

https://circleci.com/changelog/macos-default-resource-class-updated-to-m4pro-medium/
1•tqpcharlie•19m ago•0 comments

Women's Institute to Stop Offering Trans Women Membership

https://www.bbc.com/news/articles/c3e05130wyno
2•RickJWagner•20m ago•1 comment

DRAM prices may stay high past 2028 as Samsung, SK Hynix curb oversupply

https://www.pcgamer.com/hardware/memory/memory-crisis-and-sky-high-dram-prices-could-run-past-202...
4•haunter•21m ago•1 comment

Show HN: Patternia – A Zero-Overhead Pattern Matching DSL for Modern C++

https://github.com/sentomk/patternia
1•sentomk•22m ago•0 comments

Library of Time

https://libraryoftime.xyz/
1•bookofjoe•25m ago•0 comments

What Are Lie Groups?

https://www.quantamagazine.org/what-are-lie-groups-20251203/
6•ibobev•27m ago•0 comments

Diocletian's Cabbages – The Abdication of an Emperor

https://historyhogs.com/diocletians-cabbages/
1•dsego•27m ago•0 comments

You Are (Probably) Measuring Time Wrong

https://www.counting-stuff.com/why-you-are-probably-measuring-time-wrong-why-do-we-need-to-use-su...
1•speckx•27m ago•0 comments

Minimising Screen Brightness with Ubuntu

https://blog.georgovassilis.com/2025/12/03/minimising-screen-brightness-in-ubuntu/
1•ggeorgovassilis•28m ago•0 comments

Using ClickHouse for L7 DDoS and Bot Traffic Analytics with Tempesta FW

https://tempesta-tech.com/blog/defending-against-l7-ddos-and-web-bots-with-tempesta-fw/
1•krizhanovsky•29m ago•1 comment

Essentials of Compilation: An Incremental Approach (2020)

https://swatson555.github.io/essentials-of-compilation-support/
1•swatson741•29m ago•0 comments

The Market as God (1999)

https://www.theatlantic.com/magazine/archive/1999/03/the-market-as-god/306397/
1•measurablefunc•30m ago•1 comment

Man charged with theft over claims he swallowed $19k Fabergé egg

https://www.bbc.com/news/articles/c7vm754r80vo
1•onemoresoop•34m ago•0 comments

The Invisible Cost: From Creator to Consumer

https://edwardnoaland.substack.com/p/the-invisible-cost-from-creator-to
2•edwardnoaland•35m ago•0 comments

Vanbi

https://xeiaso.net/blog/vanbi-01-08-2019/
3•xena•35m ago•0 comments

Conflict with fathers and friends speeds up aging

https://news.virginia.edu/content/conflict-fathers-and-friends-speeds-aging
1•PaulHoule•36m ago•0 comments

Search for long-missing Malaysia Airlines flight MH370 to resume

https://www.bbc.com/news/articles/cy7v077dm0po
1•onemoresoop•36m ago•0 comments

Random Gods song in ORCA (2D programming language) [video]

https://www.youtube.com/watch?v=mxr8Dtw2R5w
2•ludicrousdispla•37m ago•1 comment

The Value Story Framework (2020)

https://www.reifyworks.com/writing/2020-10-14-introducing-the-vsf
1•mooreds•39m ago•0 comments

The only winning move is not to play

https://gregg.io/the-only-winning-move
3•AIBytes•39m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those claims are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
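
To make that distinction concrete, here's a toy sketch (the strings and parameters below are invented for illustration, not actual NoLiMa items): an exact-search needle can be located by string matching alone, while a NoLiMa-style needle requires a latent reasoning hop before it can even be found.

    import random

    FILLER = [
        "The weather was mild that day.",
        "Nothing of note happened in the village.",
        "A train passed in the distance.",
    ]

    def build_haystack(needle, n=2000, seed=0):
        # Bury one needle sentence among filler at a random position.
        rng = random.Random(seed)
        hay = [rng.choice(FILLER) for _ in range(n)]
        hay.insert(rng.randrange(n), needle)
        return " ".join(hay)

    # Exact-search style: the question shares words with the needle, so
    # lexical matching over a long context is enough to find the answer.
    needle_lexical = "The secret passcode is 7421."
    q_lexical = "What is the secret passcode?"

    # NoLiMa style: no word overlap; the model must first make the latent
    # hop (Semper Opera House -> Dresden) and only then locate the needle.
    needle_latent = "Yuki lives next to the Semper Opera House."
    q_latent = "Which character has been to Dresden?"

    context = build_haystack(needle_latent)  # pass context + q_latent to the LLM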
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph and to return all nodes at a certain depth.
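
For intuition, here's a rough sketch of how a Graphwalks-style item could be generated and graded. Everything here is guessed for illustration -- node count, hash length, edge format, and depth are parameters I made up, and the real generator may differ:

    import hashlib
    import random

    def make_graph(n_nodes=500, n_edges=2000, seed=0):
        # Nodes are short hexadecimal hashes; edges are random directed pairs.
        rng = random.Random(seed)
        nodes = [hashlib.sha1(str(i).encode()).hexdigest()[:16]
                 for i in range(n_nodes)]
        edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
        return nodes, edges

    def nodes_at_depth(edges, start, depth):
        # Ground truth: the BFS frontier exactly `depth` hops from `start`.
        adj = {}
        for src, dst in edges:
            adj.setdefault(src, []).append(dst)
        seen, frontier = {start}, {start}
        for _ in range(depth):
            frontier = {d for s in frontier for d in adj.get(s, [])
                        if d not in seen}
            seen |= frontier
        return frontier

    nodes, edges = make_graph()
    start = random.Random(1).choice(nodes)
    prompt = "\n".join(f"{src} -> {dst}" for src, dst in edges)
    prompt += f"\n\nDo a BFS from {start}. Return all nodes at depth 2."
    expected = nodes_at_depth(edges, start, 2)  # grade by set comparison

Because the ground truth is computed exactly, grading reduces to comparing sets of hashes, which is what makes this kind of benchmark cheap to generate and evaluate at any context length.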

MRCR asks for direct quotes at semantically identified locations in the text. For example, poems about tapirs, bears, and ballerinas are generated, along with stories about tapirs, bears, and ballerinas, perhaps fifty of each. The model is then asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
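
A toy version of that setup, just to make the mechanics concrete -- the subjects, filler text, and exact-match grading below are my guesses at the shape of it, not OpenAI's actual code:

    import random

    SUBJECTS = ["tapirs", "bears", "ballerinas"]
    FORMS = ["poem", "story"]

    def make_corpus(per_pair=50, seed=0):
        # Generate (form, subject) items, shuffle them, and record each item's
        # ordinal so "the third poem about tapirs" has a known ground truth.
        rng = random.Random(seed)
        items = [(form, subj, i) for form in FORMS for subj in SUBJECTS
                 for i in range(per_pair)]
        rng.shuffle(items)
        texts, answers, counts = [], {}, {}
        for form, subj, i in items:
            counts[(form, subj)] = counts.get((form, subj), 0) + 1
            text = f"A placeholder {form} about {subj} (id {form}-{subj}-{i})."
            texts.append(text)
            answers[(form, subj, counts[(form, subj)])] = text
        return "\n\n".join(texts), answers

    corpus, answers = make_corpus()
    question = "Give me the third poem about tapirs, quoted exactly."
    expected = answers[("poem", "tapirs", 3)]  # grade by exact string match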

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/