frontpage.

Fiber optic cables can eavesdrop on nearby conversations

https://www.science.org/content/article/fiber-optic-cables-can-eavesdrop-nearby-conversations
1•signa11•1m ago•0 comments

Top LLMs Have a Podcast Together [video]

https://www.youtube.com/watch?v=9qqmaYRI7Qw
1•modinfo•4m ago•0 comments

Killswitch: Per-function short-circuit mitigation primitive

https://lwn.net/ml/all/20260507070547.2268452-1-sashal@kernel.org/
1•signa11•5m ago•0 comments

A Tale of Two Artisans

https://koas.dev/a-tale-of-two-artisans/
1•alvaro_calleja•13m ago•1 comment

Anesthetic Risk Linked to Venezuelan Maternal Lineage

https://www.medscape.com/viewarticle/anesthetic-risk-linked-venezuelan-maternal-lineage-2026a10009ni
1•fodmap•17m ago•0 comments

The Tech Reclaimers: A Community Bicycle Repair Club for the Internet

https://www.techreclaimers.club
2•jonasced•19m ago•0 comments

What if new proofs are included in LLM training so LLMs rediscover them?

2•folderquestion•20m ago•0 comments

Essential Capabilities Insight Teams Need in a Modern Market Research Platform

https://figshare.com/articles/journal_contribution/_b_7_Essential_Capabilities_Insight_Teams_Need...
1•anasteciadunu•30m ago•0 comments

I built godom: Go owns the DOM and the browser is just a rendering surface

https://www.anupshinde.com/why-i-built-godom/
1•anupshinde•30m ago•0 comments

LLMs Corrupt Your Documents When You Delegate

https://arxiv.org/abs/2604.15597
2•rbanffy•35m ago•0 comments

Was Back‑to‑Office Enforced?

1•xchip•35m ago•0 comments

Show HN: Hum – ad-free terminal music player (Rust, no API keys)

https://github.com/Devendra116/hum/
1•devendra116•38m ago•0 comments

Closure of Radio 4 on Long Wave

https://www.bbc.co.uk/reception/work-warning/news/radio4lw
2•fredley•38m ago•0 comments

I've replaced my Claude subscription with a sleep control app

https://twitter.com/patoroco/status/2053031292594225641
2•patoroco•41m ago•0 comments

I returned to AWS, and was reminded why I left

http://fourlightyears.blogspot.com/2026/05/i-returned-to-aws-and-was-reminded-hard.html
2•andrewstuart•42m ago•1 comment

Big Tech's $725B AI spending spree sends free cash flow to a decade low

https://www.ft.com/content/b3dfaba9-17a2-4fac-90fe-4ab3ca7c9494
5•1vuio0pswjnm7•43m ago•0 comments

Meta is dying. It's about time

https://www.nytimes.com/2026/05/08/opinion/meta-facebook-zuckerberg.html
7•LucidLynx•48m ago•2 comments

Hacktoberfest 2025

https://hacktoberfest.com
1•Bikash755043•49m ago•0 comments

Impossible Assumptions

https://blog.jakobschwichtenberg.com/p/impossible-assumptions
1•unknown1111•50m ago•0 comments

Cloudflare Stock Tumbles. An Earnings Beat Wasn't Enough

https://www.barrons.com/articles/cloudfare-earnings-stock-price-be96c90f
3•1vuio0pswjnm7•51m ago•0 comments

Counting Fast in Erlang with :counters and :atomics

https://andrealeopardi.com/posts/erlang-counters-and-atomics/
1•malmz•52m ago•0 comments

Free Gpt.im

https://freegpt.im
2•Evan23345•53m ago•0 comments

International cyber attack disrupts swathe of universities and schools

https://www.bbc.com/news/articles/ce3pq0136eqo
2•1vuio0pswjnm7•54m ago•0 comments

A Man Who Almost Never Succeeded (2012)

https://www.lensrentals.com/blog/2012/10/the-man-who-almost-never-succeeded/
1•downbad_•56m ago•1 comment

Help Needed: Seeking Contributors for a Pure C Compiler and Runtime

https://github.com/heikowagner/nela-lang/issues/1
1•heikowag•57m ago•1 comment

Simplifying camera trap image analysis with AI

https://addaxdatascience.com/addaxai/
2•bryanrasmussen•59m ago•1 comment

Yesterday I had some news that has left me feeling

https://mylightstillshines.wordpress.com/2026/05/09/yesterday-i-had-some-news-that-has-left-me-fe...
1•jaygirl•1h ago•0 comments

Show HN: I Built a Retro Survival RPG in Vanilla JavaScript

2•jasonkester•1h ago•0 comments

Astroberry – OS for controlling astronomy equipment

https://astroberry.io/
1•NKosmatos•1h ago•0 comments

Show HN: Digits – Encrypted calls from gutted vintage desk phones

https://digits.family
1•justinlindh•1h ago•3 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•1y ago

Comments

kzawpl•1y ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•1y ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. Once training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
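
A Graphwalks-style task is easy to sketch: generate random edges between hex-hash node labels, dump them into the prompt, and compute the ground-truth answer with a plain BFS. This is an illustrative reconstruction of the idea, not OpenAI's actual generator; all names and parameters here are assumptions.

```python
import random

def make_graphwalks_prompt(n_nodes=64, n_edges=128, depth=2):
    """Sketch of a Graphwalks-style task: fill the context with a
    directed graph of hex-hash node labels, then ask for all nodes
    at a given BFS depth from a random start node."""
    nodes = [f"{random.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(random.choice(nodes), random.choice(nodes)) for _ in range(n_edges)]
    start = random.choice(nodes)

    # Ground truth is cheap to compute: level-by-level BFS from start.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, []) if d not in seen}
        seen |= frontier

    prompt = "\n".join(f"{s} -> {d}" for s, d in edges) + (
        f"\n\nPerform BFS from {start}; list all nodes exactly {depth} hops away."
    )
    return prompt, frontier  # frontier is the expected answer set
```

Because the answer is derived mechanically, evaluation is exact-match set comparison, and the context can be padded to millions of tokens just by adding edges.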

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas are generated alongside stories about the same subjects, perhaps fifty each, and the model is asked to "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
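
An MRCR-style generator can be sketched the same way: interleave many short items that differ only by kind, topic, and ordinal, then ask for the Nth of one kind and topic. Again, this is an illustrative sketch under assumed names and parameters, not OpenAI's implementation.

```python
import random

def make_mrcr_prompt(topics=("tapirs", "bears", "ballerinas"), per_kind=5, seed=0):
    """Sketch of an MRCR-style task: shuffle many 'poems' and 'stories'
    about the same topics into one document, then ask the model to quote
    the Nth poem about a chosen topic."""
    rng = random.Random(seed)
    items = []
    for kind in ("poem", "story"):
        for topic in topics:
            for i in range(per_kind):
                items.append((kind, topic, f"A {kind} about {topic}, number {i}."))
    rng.shuffle(items)  # interleave so the model must track order AND kind

    n = rng.randrange(1, per_kind + 1)
    target = rng.choice(topics)
    # Ground truth: the nth poem about the target topic, in document order.
    poems = [text for kind, topic, text in items if kind == "poem" and topic == target]
    answer = poems[n - 1]

    doc = "\n".join(text for _, _, text in items)
    question = f"Give me poem number {n} about {target}, quoted exactly."
    return doc + "\n\n" + question, answer
```

As with Graphwalks, the expected answer is computed mechanically at generation time, so grading reduces to exact string comparison against the model's quoted output.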

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/