
"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•10mo ago

Comments

kzawpl•10mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the model to perform any abstraction and reasoning.
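To make the distinction concrete, here is a minimal sketch of a NoLiMa-style probe. The character name, needle sentence, and filler text are all illustrative, and the model call is left abstract; the point is that the question shares no keywords with the needle, so answering it requires a latent link (Kiasma is a museum in Helsinki) rather than exact text search:

```python
# NoLiMa-style probe sketch (needle, question, and filler are illustrative).
# Classic needle-in-a-haystack lets the model match surface keywords; here
# the question never mentions "Kiasma", so a one-hop inference is required.
import random

NEEDLE = "Yuki lives next to the Kiasma museum."
QUESTION = "Which character has been to Helsinki?"
EXPECTED = "Yuki"

def build_prompt(filler_sentences, needle, question, depth=0.5, seed=0):
    """Embed the needle at a relative depth inside unrelated filler text."""
    rng = random.Random(seed)
    filler = list(filler_sentences)
    rng.shuffle(filler)
    pos = int(len(filler) * depth)
    haystack = filler[:pos] + [needle] + filler[pos:]
    return " ".join(haystack) + f"\n\nQuestion: {question}"

filler = [f"Sentence {i} is unrelated filler about the weather." for i in range(200)]
prompt = build_prompt(filler, NEEDLE, QUESTION)
# Scoring would be: does the model's answer contain EXPECTED?
# score = EXPECTED.lower() in model(prompt).lower()  # model call left abstract
```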
vessenes•10mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There just hasn't yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
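Based on that description, a Graphwalks-style task can be generated in a few lines. This is a sketch under stated assumptions -- node count, edge count, hash length, and prompt wording are all guesses, not OpenAI's actual parameters -- but it shows why the task is easy to evaluate: the gold answer is just a BFS over the same edge list that fills the context:

```python
# Graphwalks-style generator sketch (sizes and wording are illustrative).
# The context is a directed graph of hex-hash nodes; the gold answer is the
# set of nodes at a given BFS depth from a chosen root.
import random
from collections import deque

def make_graph(n_nodes=50, n_edges=150, seed=0):
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
    edges = {(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)}
    return nodes, sorted(edges)

def nodes_at_depth(edges, root, depth):
    """Gold answer: all nodes whose BFS distance from root equals `depth`."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    dist = {root: 0}
    queue = deque([root])
    while queue:
        cur = queue.popleft()
        for nxt in adj.get(cur, []):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return sorted(n for n, d in dist.items() if d == depth)

nodes, edges = make_graph()
root = nodes[0]
prompt = "\n".join(f"{src} -> {dst}" for src, dst in edges) + \
    f"\n\nStarting from {root}, list every node exactly 2 hops away (BFS)."
gold = nodes_at_depth(edges, root, 2)
```

Because scoring is set comparison against `gold`, the data scales to arbitrary context lengths by turning up the node and edge counts.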

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
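The same generate-then-score trick works here. A sketch of an MRCR-style generator, with illustrative counts and placeholder document text (the real benchmark uses model-written poems and stories, not stubs):

```python
# MRCR-style generator sketch (subjects, counts, and texts are illustrative).
# Many near-identical documents are interleaved; the gold answer to
# "give me the N-th poem about X" is recoverable by filtering in context order.
import random

def make_mrcr(subjects=("tapirs", "bears", "ballerinas"), per_kind=50, seed=0):
    rng = random.Random(seed)
    docs = []
    for kind in ("poem", "story"):
        for subject in subjects:
            for i in range(per_kind):
                docs.append((kind, subject, f"{kind} {i} about {subject}: ..."))
    rng.shuffle(docs)  # interleave, so ordinals must be tracked across the context
    return docs

def gold_answer(docs, kind, subject, n):
    """The n-th (1-indexed) document of `kind` about `subject`, in context order."""
    matches = [text for k, s, text in docs if k == kind and s == subject]
    return matches[n - 1]

docs = make_mrcr()
context = "\n".join(text for _, _, text in docs)
question = "Give me the third poem about tapirs."
answer = gold_answer(docs, "poem", "tapirs", 3)
```

The evaluation is then an exact-quote match against `answer`, which is what makes counting and the poem/story distinction load-bearing rather than retrievable by keyword.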

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/