
Show HN: A mayor simulator where corruption is gameplay

https://store.steampowered.com/app/3834340/Mayor_Life_Simulator/
1•playbgames•15s ago•0 comments

Adafruit – Our First Gemini Deep Think LLM-Assisted Hardware Design

https://blog.adafruit.com/2026/02/14/heres-our-first-gemini-deep-think-llm-assisted-hardware-design/
1•rwmcfa1•45s ago•0 comments

Show HN: R.A.T.A. v3 – Sub-Atomic Information Density on Solana

https://rata-said.vercel.app/
1•Codedwaves•1m ago•0 comments

I made Bad Apple run in 5,258 microfrontends

https://twitter.com/nstlopez/status/2023066453029917027
1•Nsttt•4m ago•0 comments

Show HN: ShadowStrike – building an open-source EDR from scratch

https://github.com/Soocile/ShadowStrike
1•Soocile•8m ago•0 comments

Show HN: Noctaploy. A Postgres-first managed platform (public beta)

1•antoniodipinto•8m ago•0 comments

Hideki Sato, designer of all Sega's consoles, has died

https://www.videogameschronicle.com/news/hideki-sato-designer-of-segas-consoles-dies-age-75/
5•magoghm•9m ago•0 comments

Tell HN: OpenAI has been silently routing GPT-5.3-Codex requests to GPT-5.2

https://github.com/openai/codex/issues/11561
2•prodigycorp•12m ago•1 comment

America's Future Is African

https://www.liberalcurrents.com/future-is-african/
1•sandbach•13m ago•0 comments

Unsinkable Tubes Could Help Harvest Energy from the Ocean

https://www.nytimes.com/2026/02/15/science/unsinkable-aluminum-tubes.html
1•Brajeshwar•13m ago•0 comments

Cloudflare turns websites into faster food for AI agents

https://www.theregister.com/2026/02/13/cloudflare_markdown_for_ai_crawlers/
2•Bender•14m ago•0 comments

AI Didn't Kill Creativity. It Killed Your Excuses

https://garryslist.org/posts/ai-didn-t-kill-creativity-it-killed-your-excuses
1•andsoitis•14m ago•1 comment

LLM-written short story about being a LLM

https://twitter.com/jamesjyu/status/2022926490619248883
1•johnboiles•14m ago•0 comments

Tell HN: Google AI Studio docs encourage Google-discoverable open wallets

https://github.com/qudent/qudent.github.io/blob/master/_posts/2026-01-16-aistudio-proxy.md
1•qudent•15m ago•1 comment

US is moving ahead with colocated nukes and datacenters

https://www.theregister.com/2026/02/13/us_moving_ahead_with_colocated/
2•Bender•15m ago•0 comments

Amazon-backed X-Energy gets green light for mini reactor fuel production

https://www.theregister.com/2026/02/14/x_energy_smr_fuel/
1•Bender•16m ago•0 comments

Sex toys maker Tenga says hacker stole customer information

https://techcrunch.com/2026/02/13/sex-toys-maker-tenga-says-hacker-stole-customer-information/
3•SilverElfin•17m ago•0 comments

We're on the Voyage of the Damned

https://www.nytimes.com/2026/02/14/opinion/welcome-to-the-voyage-of-the-damned.html
1•jamesgill•17m ago•0 comments

Generative and Agentic AI Shift Concern from Tech Debt to Cognitive Debt

https://margaretstorey.com/blog/2026/02/09/cognitive-debt/
1•cratermoon•19m ago•0 comments

The silver fox domestication experiment [pdf]

https://link.springer.com/article/10.1186/s12052-018-0090-x
1•thunderbong•20m ago•0 comments

Ask HN: Crazy to Pivot PM into Engineering in '26?

1•ediblelegible•21m ago•2 comments

PersonaPlex-7B: full-duplex voice model that listens and talks at the same time

https://huggingface.co/nvidia/personaplex-7b-v1
1•MrBuddyCasino•24m ago•0 comments

Wall Street could seize your retirement savings in the next financial crash

https://www.foxnews.com/opinion/wall-street-could-seize-your-retirement-savings-next-financial-cr...
5•newsoftheday•25m ago•1 comment

Canada Has a Secessionist Movement on Its Hands. Its Supporters Thank Trump

https://www.wsj.com/world/americas/alberta-canada-independence-7549e240
2•Teever•29m ago•0 comments

WTF Happened in 2012?

https://wtfhappened2012.com/
2•RandomDailyUrls•30m ago•0 comments

The Enfield Thunderbolt: An electric car before its time (2013)

https://www.bbc.com/news/magazine-25117784
2•andsoitis•30m ago•0 comments

Why AI Agents Cannot Verify Email Addresses

https://app.writtte.com/read/gWP8dTq
1•lasgawe•31m ago•0 comments

An Enslaved Gardener Transformed the Pecan into a Cash Crop

https://lithub.com/how-an-enslaved-gardener-transformed-the-pecan-into-a-cash-crop/
16•PaulHoule•36m ago•1 comment

The Surprising Maths of Countdown, Britain's Oldest Game Show [video]

https://www.youtube.com/watch?v=X-7Wev90lw4
2•Timothee•39m ago•0 comments

Flaky Tests Are Not a Testing Problem. They're a Feedback Loop You Broke

1•microseyuyu•41m ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•9mo ago

Comments

kzawpl•9mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are usually tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•9mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
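The Graphwalks recipe above is easy to reproduce at toy scale. A minimal sketch (function name, graph sizes, and prompt wording are my own illustration, not OpenAI's actual harness): generate a random directed graph over hex-hash node names, run a ground-truth BFS, and keep the nodes at the target depth as the gold answer.

```python
import random
from collections import deque

def make_graphwalks_prompt(num_nodes=32, num_edges=64, depth=2, seed=0):
    """Build a toy Graphwalks-style task: a directed graph over hex-hash
    node names, plus the gold answer (all nodes first reached at exactly
    `depth` steps of BFS from a random start node)."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(num_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(num_edges)]

    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)
    # Standard BFS, recording the depth at which each node is first reached.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)

    gold = sorted(n for n, d in dist.items() if d == depth)
    edge_list = "\n".join(f"{s} -> {d}" for s, d in edges)
    prompt = (f"{edge_list}\n\nPerform BFS from node {start}. "
              f"List every node at depth exactly {depth}.")
    return prompt, gold
```

Because the gold answer is computed mechanically, grading is a set comparison -- which is what makes this kind of benchmark cheap to scale to millions of tokens of filler edges.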

MRCR asks for direct quotes at semantically identified locations in the text: e.g., stories and poems about tapirs, bears, and ballerinas are generated, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
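A toy version of that MRCR setup can be sketched in a few lines (the generator below is my own illustration of the idea, not OpenAI's code; topics and counts are placeholders): interleave short "poems" and "stories" about each topic, then ask for the n-th poem about one topic, keeping the exact target text as the gold answer.

```python
import random

def make_mrcr_task(topics=("tapir", "bear", "ballerina"), per_kind=5, seed=0):
    """Toy MRCR-style haystack: shuffled short 'poems' and 'stories'
    about each topic, a question asking for the n-th poem about one
    topic, and the gold quote for exact-match grading."""
    rng = random.Random(seed)
    items = []
    for kind in ("poem", "story"):
        for topic in topics:
            for i in range(per_kind):
                # A unique id stands in for distinct generated text.
                text = f"({kind} about the {topic}) unique-id {rng.getrandbits(32):08x}"
                items.append((kind, topic, text))
    rng.shuffle(items)

    haystack = "\n".join(text for _, _, text in items)
    kind, topic = "poem", rng.choice(topics)
    n = rng.randrange(per_kind) + 1
    # Gold = the n-th poem about `topic`, in document order after shuffling.
    matches = [t for k, tp, t in items if k == kind and tp == topic]
    question = f"Quote {kind} number {n} about the {topic} exactly."
    return haystack, question, matches[n - 1]
```

Grading is again mechanical: check the model's quote against the stored gold string, which is why both benchmarks can be generated and evaluated programmatically at arbitrary context lengths.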

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/