frontpage.

Genkit Middleware: Intercept, extend, and harden your agentic apps Blog

https://developers.googleblog.com/announcing-genkit-middleware-intercept-extend-and-harden-your-a...
1•shallow-mind•1m ago•0 comments

Programmable Phones

https://tailrecursion.com/~alan/ProgrammablePhones.html
1•wooby•4m ago•0 comments

The rise and fall of an AI-driven 'local news outlet' in South Florida

https://floridatrib.org/2026/05/14/the-rise-and-fall-of-an-ai-driven-local-news-outlet-in-south-f...
1•martey•6m ago•0 comments

What Are AI Ethics

https://krellixlabs.com/en/blog/what-are-ai-ethics
1•radu_me•7m ago•0 comments

Ask HN: One mistake or hack that taught you the most?

1•SyntaxErrorist•8m ago•2 comments

Agentic evals or LLM-as-a-judge? Considering cost, time, and quality

1•pipelineofone•11m ago•0 comments

Lookup.disclose.io – find the right security contact for any asset

https://lookup.disclose.io
2•caseyjohnellis•12m ago•1 comment

Agentic SDLC: How OpenSearch accelerates engineering with its own engine

https://opensearch.org/blog/harness-first-agentic-sdlc-how-opensearch-builds-software-using-its-o...
1•Lunar5227•16m ago•0 comments

Show HN: QUptime, a quorum-based decentralized uptime tool

https://github.com/Axodouble/QUptime
1•Axodouble•19m ago•0 comments

Multi-LLM trading harness with live leaderboard on Alpaca paper trades

https://github.com/achaljhawar/1rok
1•satoshiclad•26m ago•0 comments

What Are the Different Types of AI Testing Tools?

1•allenmatthew•37m ago•0 comments

The Power of the Breath

https://medicine.yale.edu/news-article/the-power-of-the-breath/
3•andsoitis•43m ago•2 comments

Social Media Bans Are for Kids. What About Adults?

https://pmz1.substack.com/p/social-media-bans-are-for-kids-what
3•gieksosz•47m ago•1 comment

How climate-resilient homes in India are reducing dependence on air conditioners

https://www.thehindu.com/sci-tech/energy-and-environment/how-climate-resilient-homes-in-india-are...
2•rustoo•54m ago•0 comments

OpenAI just lost its enterprise AI crown to Anthropic

https://www.businessinsider.com/anthropic-tops-openai-business-ai-adoption-ramp-index-2026-5
3•mazokum•55m ago•0 comments

Edith Eger, Auschwitz Survivor Who Helped Others Cope with Trauma, Dies at 98

https://www.wsj.com/world/edith-eva-eger-dead-13268534
3•hodgesrm•1h ago•0 comments

Britain investigates Microsoft over business software dominance

https://www.reuters.com/legal/litigation/uk-opens-antitrust-probe-into-microsofts-business-softwa...
3•frm88•1h ago•1 comment

Random AI Explained Fast

https://www.youtube.com/watch?v=XURpiqSelBw
2•KornClown7•1h ago•0 comments

Perfect Number Bomb (2025)

https://www.quantumcalculus.org/odd-perfect-number-bomb/
2•nill0•1h ago•0 comments

FilePilot AI – local-first desktop file manager with optional AI summaries

https://github.com/cuiheng511/filepilot-ai
2•cui511511•1h ago•0 comments

How the Ingredients of Life Make Our Journey Worthwhile

https://medium.com/create-your-career/how-the-ingredients-of-life-make-our-journey-worthwhile-12e...
1•andsoitis•1h ago•0 comments

US reportedly dropped fraud charges against Adani after he hired Trump's lawyer

https://www.theguardian.com/us-news/2026/may/14/gautam-adani-billionaire-trump
2•dilawar•1h ago•0 comments

Logic bug in the Linux kernel's __ptrace_may_access() function (LPE)

https://www.openwall.com/lists/oss-security/2026/05/15/2
2•Tiberium•1h ago•0 comments

Ask HN: How do you catch regressions when you change your AI agent's prompt?

1•yakshithk_•1h ago•0 comments

C++26 Shipped a SIMD Library Nobody Asked For

https://lucisqr.substack.com/p/c26-shipped-a-simd-library-nobody
2•signa11•1h ago•0 comments

Solar Is Everything

https://www.barchart.com/story/news/37361552/solar-is-everything-teslas-elon-musk-says-other-ener...
2•andsoitis•1h ago•0 comments

Claude free usage limits are nuts. Useless

4•paulpauper•1h ago•0 comments

PSVL 1.0 – The most comprehensive source-visible license (276 clauses)

https://github.com/BMBOMICH/PSVL
1•BMBOMICH•1h ago•0 comments

How Claude Code works in large codebases

https://claude.com/blog/how-claude-code-works-in-large-codebases-best-practices-and-where-to-start
76•shenli3514•1h ago•45 comments

Natural rhythm of sleep compared to modern social norms

https://dylan.gr/1775146616
4•James72689•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•1y ago

Comments

kzawpl•1y ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
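Roughly, the difference looks like this. A toy contrast to illustrate the distinction only -- these are not NoLiMa's actual items, distractor text, or scoring:

    import random

    # Verbatim needle-in-a-haystack probe vs. a probe that needs one
    # associative hop (Kiasma is in Helsinki). Names and filler are invented.
    def build_probe(latent: bool, n_filler: int = 2000, seed: int = 0):
        rng = random.Random(seed)
        filler = [f"Unrelated filler sentence number {i}." for i in range(n_filler)]
        if latent:
            needle = "Actually, Yuki lives right next to the Kiasma museum."
            question = "Which character has been to Helsinki?"  # needs Kiasma -> Helsinki
        else:
            needle = "The secret passcode is 7421."
            question = "What is the secret passcode?"           # literal string match
        filler.insert(rng.randrange(len(filler) + 1), needle)
        return "\n".join(filler), question

Exact-match retrieval of the passcode is the kind of thing models now do well at long context; the latent, one-hop version is where accuracy falls off.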
vessenes•1y ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
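A rough sketch of how one such item could be generated and scored -- the node count, hash length, BFS depth, and prompt wording here are made-up parameters for illustration, not OpenAI's:

    import hashlib
    import random
    from collections import deque

    def make_graphwalks_item(n_nodes=64, out_degree=3, depth=2, seed=0):
        # Build a directed graph whose node names are hex hashes, then compute
        # the gold answer: every node exactly `depth` BFS steps from a random
        # start node.
        rng = random.Random(seed)
        nodes = [hashlib.sha1(f"node-{i}-{seed}".encode()).hexdigest()[:8]
                 for i in range(n_nodes)]
        edges = {u: rng.sample([v for v in nodes if v != u], out_degree)
                 for u in nodes}

        start = rng.choice(nodes)
        dist = {start: 0}                      # shortest BFS depth per node
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        gold = sorted(v for v, d in dist.items() if d == depth)

        context = "\n".join(f"{u} -> {', '.join(vs)}" for u, vs in edges.items())
        prompt = (f"{context}\n\nPerform a breadth-first search from node {start} "
                  f"and list every node exactly {depth} steps away.")
        return prompt, gold

Grading is then just a set comparison of the model's list against gold, which is what makes this kind of data cheap to generate at any context length.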

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty each. The system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
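Something like this toy generator captures the shape of it -- the topics, counts, filler text, and question wording are invented; the real data set is presumably far richer:

    import random

    def make_mrcr_item(topics=("tapirs", "bears", "ballerinas"), per_kind=5, seed=0):
        # Shuffle short 'poems' and 'stories' about several topics into one
        # context, then ask for the Nth poem about one topic. The gold answer
        # is the exact text of that item, counted in order of appearance.
        rng = random.Random(seed)
        items = []
        for kind in ("poem", "story"):
            for topic in topics:
                for _ in range(per_kind):
                    filler = rng.randint(0, 99999)        # makes each item unique
                    items.append((kind, topic, f"[a {kind} about {topic}: la la la {filler}]"))
        rng.shuffle(items)

        topic = rng.choice(topics)
        n = rng.randint(1, per_kind)
        context = "\n".join(text for _, _, text in items)
        poems_in_order = [t for k, tp, t in items if k == "poem" and tp == topic]
        gold = poems_in_order[n - 1]
        prompt = (f"{context}\n\nQuote, verbatim, poem number {n} about {topic}, "
                  f"counting in order of appearance.")
        return prompt, gold

Scoring is again mechanical: an exact (or fuzzy) string match against gold.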

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/