frontpage.

Andreessen, Thrive Poised for Windfall from SpaceX's Cursor Bid

https://www.bloomberg.com/news/articles/2026-04-22/andreessen-thrive-poised-for-windfall-from-spa...
1•petethomas•17s ago•0 comments

A new programming model for durable execution

https://vercel.com/blog/a-new-programming-model-for-durable-execution
1•gmays•19s ago•0 comments

Show HN: Archon-memory-core – agent memory that resolves contradictions

https://divergencerouter.com/amc/
1•Divergence42•3m ago•0 comments

Scaling Test-Time Compute for Agentic Coding

https://arxiv.org/abs/2604.16529
1•matt_d•3m ago•0 comments

Opus 4.7 is having a rough day. Double-check its work

https://imgur.com/a/eg5zL1u
1•prallo•4m ago•0 comments

LemmaScript: A Verification Toolchain for TypeScript via Dafny

https://midspiral.com/blog/lemmascript-a-verification-toolchain-for-typescript/
1•namin•5m ago•0 comments

Six-year-old girl has sight restored by gene therapy

https://news.sky.com/story/parents-hail-incredible-results-after-six-year-old-girl-has-sight-rest...
3•austinallegro•8m ago•0 comments

What 81,000 Claude users said about the economics of AI

https://www.anthropic.com/research/81k-economics
1•schyzomaniac•8m ago•1 comment

Claude Code for the Outer Loop: An AI SRE Playbook

https://www.arcade.dev/blog/claude-code-ai-sre-oncall-workflows/
1•manveerc•10m ago•0 comments

Show HN: A visual CSS editor, Mac native

https://bendansby.com/cest/
1•webwielder2•13m ago•0 comments

生き甲斐 (ikigai) “a reason for being”

https://en.wikipedia.org/wiki/Ikigai
1•guessmyname•15m ago•0 comments

Ping-Pong Robot Stuns World by Defeating Elite Human Players [video]

https://www.youtube.com/watch?v=lWp6XNHaWRk
1•mgh2•16m ago•0 comments

I'm Using Claude Code for Everything Else but Coding

https://chandlernguyen.com/blog/2026/04/22/im-using-claude-code-for-everything-else-but-coding/
1•chandlernguyen•17m ago•1 comment

We built a multi-agent app on Genkit and Firebase

https://www.conveen.ai/building-with-genkit-and-firebase
2•ruby-kandah•18m ago•0 comments

Tom Lehrer (1928–2025): A (Mostly) Mathematical Appreciation [pdf]

https://www.ams.org/journals/notices/202602/rnoti-p118.pdf
2•ganitam•19m ago•1 comment

There's Another Reason Gen Z Can't Find Work

https://www.nytimes.com/2026/04/22/opinion/gen-z-job-ladder.html
1•doener•20m ago•0 comments

Proximal Policy Optimization with Clojure and PyTorch

https://clojurecivitas.org/ppo/main.html
1•wedesoft•20m ago•1 comment

Oxford Calculators

https://en.wikipedia.org/wiki/Oxford_Calculators
2•danielam•20m ago•0 comments

Apple Is Boring Now

https://www.theatlantic.com/ideas/2026/04/tim-cook-ternus-apple/686893/
1•paulpauper•21m ago•0 comments

A 'Barbaric' Problem in American Hospitals Is Only Getting Bigger

https://www.theatlantic.com/health/2026/04/emergency-department-boarding-crisis/686765/
1•paulpauper•21m ago•0 comments

Tensor Algebra to Represent and Accelerate RTL Simulation

https://arxiv.org/abs/2601.18140
2•sha_rad•22m ago•0 comments

Notes from a Marketer Building a Real CLI with Codex

https://lindsaybrunner.com/thoughts/2026-04-11/building-a-cli-with-ai/
1•mooreds•26m ago•0 comments

Show HN: RedAI – AI-driven vulnerability discovery and live validation

https://github.com/kpolley/redai
1•kpolls•28m ago•0 comments

Bun 1.1.13 out with memory fixes as devs complain of leaks

https://www.theregister.com/2026/04/21/anthropics_bun_1113_released_with_memory_fixes/
1•birdculture•29m ago•0 comments

The AI Power Bottleneck: Data Centers Meet the Steel Monopoly

https://blog.adafruit.com/2026/04/22/the-ai-power-bottleneck-data-centers-meet-the-steel-monopoly/
2•zdw•32m ago•0 comments

If This Road

https://ifthisroad.com/
2•Oarch•33m ago•0 comments

Tim Cook Regrets Maps Flub, Sees Apple Watch as His Proudest Work

https://www.bloomberg.com/news/articles/2026-04-22/tim-cook-regrets-maps-flub-sees-apple-watch-as...
2•amrrs•35m ago•0 comments

Polymarket weather bet manipulated with a hairdryer

https://twitter.com/aaronjmars/status/2047017251270734309
4•dnw•35m ago•0 comments

Show HN: Markdown editor with Obsidian-style inline live preview

https://kenforthewin.github.io/atomic-editor/
1•kenforthewin•35m ago•1 comment

How we think about truth, verification, and "time to first trust" at Webhound

https://www.webhound.ai/news/time-to-first-trust
1•mfkhalil•37m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the model to perform some abstraction and reasoning.
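
A toy sketch of the distinction (the strings below are my own illustration, not actual NoLiMa items; the Semperoper/Dresden pairing echoes the kind of one-hop link the benchmark tests):

    filler = " ".join(["Lorem ipsum dolor sit amet."] * 5000)  # long distractor text

    # Classic needle-in-a-haystack: answerable by exact string matching.
    needle = "The secret code is 7341."
    haystack = filler + " " + needle + " " + filler
    question = "What is the secret code?"

    # NoLiMa-style needle: the question and the needle share no keywords,
    # so the model must make a one-hop inference (Semperoper -> Dresden)
    # rather than pattern-match on surface text.
    needle = "Yuki lives next to the Semperoper."
    haystack = filler + " " + needle + " " + filler
    question = "Which character has been to Dresden?"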
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But, it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
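
A minimal sketch of how such an item could be generated and scored (function name, sizes, and prompt wording are my assumptions, not OpenAI's actual harness):

    import random
    from collections import deque

    def make_graphwalks_item(n_nodes=500, n_edges=2000, depth=2, seed=0):
        """Random directed graph of hex-hash nodes, plus the ground-truth
        set of nodes exactly `depth` BFS steps from a random start node."""
        rng = random.Random(seed)
        nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
        edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
        adj = {}
        for src, dst in edges:
            adj.setdefault(src, []).append(dst)
        start = rng.choice(nodes)
        dist, queue = {start: 0}, deque([start])
        while queue:  # standard BFS, recording first-reach depth
            node = queue.popleft()
            for nxt in adj.get(node, []):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        answer = sorted(n for n, d in dist.items() if d == depth)
        prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
        prompt += f"\n\nBFS from {start}: list every node exactly {depth} steps away."
        return prompt, answer

Generation is cheap, the ground truth is exact, and a correct answer forces the model to chase edges scattered across the whole context rather than retrieve one local span.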

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The model is then asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
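
In the same spirit, a toy version of an MRCR item (templated text for brevity; the real corpus pieces are presumably model-written, and the names here are mine):

    import random

    def make_mrcr_item(subjects=("tapirs", "bears", "ballerinas"),
                       forms=("poem", "story"), n_each=50,
                       form="poem", subject="tapirs", ordinal=3, seed=0):
        """Shuffle many short poems/stories together, then ask for a
        verbatim quote of the ordinal-th <form> about <subject>."""
        rng = random.Random(seed)
        pieces = [(f, s, f"A {f} about {s}, variant {i}.")
                  for f in forms for s in subjects for i in range(n_each)]
        rng.shuffle(pieces)  # bury the targets among near-duplicates
        context = "\n\n".join(text for _, _, text in pieces)
        # Ground truth: the ordinal-th matching piece in document order.
        matches = [t for f, s, t in pieces if f == form and s == subject]
        question = f"Give me {form} number {ordinal} about {subject}, quoted exactly."
        return context, question, matches[ordinal - 1]

Scoring can then be a simple (or fuzzy) string match against the returned quote.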

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/