frontpage.

How Alive Can a Video Game Street Be?

https://www.youtube.com/watch?v=5wU4AB7Y2Kw
1•pippy360•37s ago•1 comment

Impact of 100% Adoption of AI Coding Agents by Non-Technical Team

https://www.kapwing.com/blog/how-we-achieved-100-adoption-of-ai-coding-agents/
1•jenthoven•56s ago•0 comments

Russia and China veto UN resolution aimed at reopening the Strait of Hormuz

https://apnews.com/article/un-iran-us-strait-hormuz-bahrain-resolution-640e644b57df5c762ed9c57ef8...
1•WaitWaitWha•58s ago•0 comments

ICE acknowledges it is using powerful spyware

https://text.npr.org/nx-s1-5776799
1•dnemmers•4m ago•0 comments

Law Students: AI Is Changing Things

https://maxglobalnews.com/the-future-of-lawyers-in-the-age-of-ai-%f0%9f%92%bb/
1•videobroker•8m ago•0 comments

Show HN: Kylrix – open-source de-Googled privacy suite for techies

1•nathfavour•10m ago•1 comment

The way every agent framework handles MCP is a latent security problem

2•An0n_Jon•18m ago•0 comments

Steergen: Single-source steering docs for spec-driven development

https://github.com/aabs/steergen
1•andrew_matthews•23m ago•0 comments

Voltage drops when powering a Raspberry Pi4B from a LM2596T regulator

https://electronics.stackexchange.com/questions/767760/voltage-drops-when-powering-a-raspberry-pi...
1•kristianp•24m ago•0 comments

I wrote maternity experience on my resume

https://story.cv/blog/articles/maternity-leave-on-resume
1•kavyaj•25m ago•1 comment

GitHub Copilot CLI now supports BYOK and local models

https://github.blog/changelog/2026-04-07-copilot-cli-now-supports-byok-and-local-models/
2•lemonish97•25m ago•1 comment

Odd Lots: This Is How to Tell If Writing Was Made by AI

https://www.bloomberg.com/news/audio/2026-04-02/odd-lots-how-to-tell-if-writing-was-made-by-ai-po...
1•KnuthIsGod•26m ago•0 comments

White Noise (encrypted group chat over Nostr) completes security audit

https://leastauthority.com/blog/audit-of-white-noise-whitenoise-rs/
1•iamnothere•28m ago•0 comments

Phishing Campaigns "I Paid Twice" Targeting Booking.com Hotels and Customers

https://blog.sekoia.io/phishing-campaigns-i-paid-twice-targeting-booking-com-hotels-and-customers/
1•quantisan•29m ago•0 comments

JSIR: A High-Level IR for JavaScript

https://discourse.llvm.org/t/rfc-jsir-a-high-level-ir-for-javascript/90456
1•nnx•31m ago•0 comments

Is my writing too wet?

https://samkriss.substack.com/p/is-my-writing-too-wet
1•cainxinth•33m ago•0 comments

Apple is pissing me off. (rant - Account Settings)

3•TechPlasma•35m ago•0 comments

Zayvora

https://ollama.com/daxini2404
1•dharamdaxini•41m ago•0 comments

Downdetector for Claudecode Vibes

https://claudedumb.com/
2•ymaws•42m ago•0 comments

Incremental Compilation with LLVM

https://ziglang.org/devlog/2026/#2026-04-08
2•mlugg•48m ago•0 comments

Goldman CIO Marco Argenti on the Warp-Speed Improvements in AI

https://podcasts.apple.com/us/podcast/goldman-cio-marco-argenti-on-the-warp-speed/id1056200096?i=...
1•KnuthIsGod•51m ago•0 comments

Longer wavelengths in sunlight pass through the human body and have a systemic

https://www.nature.com/articles/s41598-025-09785-3
2•bilsbie•52m ago•0 comments

Show HN: Can an AI model fit on a single pixel?

https://github.com/dvelton/ai-pixel
2•deevelton•57m ago•0 comments

Reflections on Vibe Researching

https://joshuagans.substack.com/p/reflections-on-vibe-researching
1•NomNew•1h ago•0 comments

The Signal Ledger: Turning technical reading into compounding research

https://blog.pjhoberman.com/build-a-knowledge-base-that-compounds
2•hookedonwinter•1h ago•0 comments

Naftiko Open-Source Spec-Driven Integration

https://github.com/naftiko/framework/
4•apievangelist•1h ago•1 comment

Nutshell Newsletter – Daily Customizable RSS Summarizer

https://www.nutshellnewsletter.com/
2•jhiggins777•1h ago•1 comment

Ask HN: Is there any tool that can stop LLM calls at runtime (not just monitor)?

3•8dazo•1h ago•0 comments

We built a tool for custom timeouts on your Unity Cloud builds

https://buildstash.com/blog/custom-timeouts-on-your-unity-cloud-builds
2•r0bbie•1h ago•1 comment

The Wealthy Investors That Powered Private Credit Are Rushing for the Exits

https://www.wsj.com/finance/investing/the-wealthy-investors-that-powered-private-credit-are-rushi...
3•toomuchtodo•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting: OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
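To make the construction concrete, here is a minimal sketch of how Graphwalks-style data might be generated. This is not code from the announcement; the function name and parameters are my own, and the real benchmark presumably differs in scale and prompt wording.

```python
import random
from collections import deque

def make_graphwalks_example(n_nodes=64, n_edges=192, depth=2, seed=0):
    """Generate a Graphwalks-style prompt: a directed graph whose nodes are
    hexadecimal hashes, plus the gold answer (all nodes at `depth` hops
    from a randomly chosen root, found by BFS)."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    root = rng.choice(nodes)
    # Standard BFS: record the depth at which each node is first reached.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    gold = sorted(n for n, d in dist.items() if d == depth)
    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    prompt += f"\n\nPerform a BFS from {root} and list all nodes at depth {depth}."
    return prompt, gold
```

The point of the design is that the edge list fills the context window with incompressible content, and grading is trivial: compare the model's node list against `gold`.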

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about the same subjects, are generated, perhaps fifty each. The system is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
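A rough sketch of that MRCR-style construction, again with hypothetical names and placeholder text rather than OpenAI's actual generator:

```python
import random

def make_mrcr_example(n_per=5, seed=0):
    """Build an MRCR-style task: interleave many generated poems and stories
    about the same subjects, then ask for the third poem about one subject.
    The gold answer is that exact item, so grading is a direct string match."""
    rng = random.Random(seed)
    subjects = ["tapirs", "bears", "ballerinas"]
    items = []
    for kind in ("poem", "story"):
        for subj in subjects:
            for i in range(n_per):
                # Placeholder bodies; a real generator would emit varied prose.
                items.append((kind, subj, f"{kind} #{i} about {subj}: ..."))
    rng.shuffle(items)
    context = "\n\n".join(text for _, _, text in items)
    # "Third" is counted in document order, i.e. after shuffling.
    gold = [t for kind, s, t in items if kind == "poem" and s == "tapirs"][2]
    question = "Give me the third poem about tapirs, quoted exactly."
    return context, question, gold
```

Because the distractors share subjects and differ only in form (poem vs. story) and ordinal position, the model can't answer by keyword matching alone -- it has to track both category and count across the whole context.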

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/