frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

Stop Vibe Coding: When AI-Driven Development Backfires and What Works

https://ssebs.com/blog/vibe-coding-1/
1•ssebs•51s ago•1 comment

Vulnerabilities in Cloudflare's vinext disclosed by Vercel

https://twitter.com/rauchg/status/2026864132423823499
1•anematode•1m ago•0 comments

Writing Crystalized Thinking at Amazon. Is AI Muddying It?

https://www.bigtechnology.com/p/writing-crystalized-thinking-at-amazon
1•davidst•2m ago•0 comments

Bill Gates reportedly apologizes, admits to two affairs in candid town hall

https://www.cnbc.com/2026/02/25/bill-gates-epstein-files-affair.html
2•1vuio0pswjnm7•6m ago•0 comments

Undeleted XAA, making X up to >200x faster Accelerated Again

https://www.patreon.com/posts/undeleted-xaa-x-151028801
1•csmantle•9m ago•1 comment

Lyte2D: A comfy little game engine

https://lyte2d.com/lyte.html?zip=public/lyte-intro.zip
1•todsacerdoti•10m ago•0 comments

Are Glassholes Using Smart Glasses Near You? There's an App for That

https://gizmodo.com/want-to-know-if-glassholes-are-using-smart-glasses-near-you-theres-an-app-for...
1•laurex•11m ago•0 comments

0 A.D. Open-Source RTS Game Drops Alpha Label After 16 Years

https://linuxiac.com/0ad-open-source-game-drops-alpha-label-after-16-years/
1•WaitWaitWha•11m ago•1 comment

The happiest I've ever been

https://ben-mini.com/2026/the-happiest-ive-ever-been
2•bewal416•12m ago•0 comments

Canada and South Korea sign a defence agreement

https://www.cbc.ca/lite/story/9.7106354
1•colinprince•14m ago•0 comments

Bill Gates Comes Clean

https://wabcradio.com/2026/02/25/bill-gates-comes-clean/
2•jhallenworld•14m ago•0 comments

SkillsBench: The First Benchmark for Agent Skills

https://www.skillsbench.ai/blogs/introducing-skillsbench
1•aratahikaru5•15m ago•0 comments

Show HN: Oh-My-OpenClaw – agent orchestration for coding, from Discord/Telegram

https://github.com/happycastle114/oh-my-openclaw
2•soungmin114•16m ago•0 comments

Show HN: Runtric – Turn any topic into a chapter-based learning path

https://runtric.com/
1•resetmerlin•16m ago•0 comments

Washington Post Losses Topped $100M in 2025

https://www.wsj.com/business/media/washington-post-losses-topped-100-million-in-2025-85076aae
3•mudil•19m ago•1 comment

Testing "Raw" GPU Cache Latency

https://clamtech.org/?dest=gpudirectlatency
1•mfiguiere•19m ago•0 comments

In 2100, 2 socio-economic classes exist

1•shoman3003•19m ago•0 comments

Anthropic and the Department of War

https://thezvi.wordpress.com/2026/02/25/anthropic-and-the-department-of-war/
2•acconrad•20m ago•0 comments

A Sorority Gave Our App a 2/10, So I Built an AI Version of Them

https://medium.com/@empadev64/a-sorority-gave-our-app-a-2-10-so-i-built-an-ai-version-of-them-it-...
1•anthony_kw•21m ago•0 comments

Show HN: DeltaMemory – Persistent cognitive memory for production AI agents

https://www.deltamemory.com/
1•bikidev•28m ago•1 comment

Gender markers are useless, so why not abolish them?

https://policyoptions.irpp.org/2021/11/gender-markers-are-useless-so-why-not-abolish-them/
3•KittenInABox•29m ago•1 comment

Show HN: Director-AI – token-level NLI+RAG

https://github.com/anulum/director-ai
1•anulum•31m ago•2 comments

LazyGravity – I made my phone control Antigravity so I never leave bed

2•masaTokyo•32m ago•1 comment

Ask HN: Books about Communication

1•soupfordummies•34m ago•0 comments

US role as global talent hub in doubt amid Donald Trump's visa crackdown

https://www.ft.com/content/c8114fd1-771b-49ac-98c3-a8acf6177626
3•johntfella•35m ago•0 comments

That's it. Bill Gates is DONE

https://www.youtube.com/watch?v=NZWT75CKZko
1•cable2600•35m ago•0 comments

The Intelligent OS: Making AI agents more helpful for Android apps

https://android-developers.googleblog.com/2026/02/the-intelligent-os-making-ai-agents.html
1•ming030890•37m ago•0 comments

Deep Learning Crash Course

https://github.com/DeepTrackAI/DeepLearningCrashCourse
1•teleforce•39m ago•0 comments

Test drive Linux distros online

https://distrosea.com/
1•goodmythical•41m ago•0 comments

What AI tools is everyone using now for GTM?

1•imwoody•42m ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•9mo ago

Comments

kzawpl•9mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
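For intuition, here is a toy sketch of the distinction NoLiMa probes (the filler text, needle wording, and questions below are invented for illustration; this is not the benchmark's actual data): an exact-match needle can be found by lexical search alone, while a latent needle requires a one-hop association the context never states.

```python
import random

random.seed(0)

filler = ["The weather report mentioned light rain over the valley."] * 200

# Exact-match needle: the question reuses the needle's own wording,
# so plain string search (verbatim retrieval) is enough.
exact_needle = "The secret passcode is 7319."
exact_question = "What is the secret passcode?"

# Latent needle: answering needs an association the text never states
# (Helsinki -> Finland), so lexical matching alone cannot find it.
latent_needle = "Yuki mentioned she lives next to the Helsinki opera house."
latent_question = "Which character has been to Finland?"

def build_haystack(needle):
    """Bury the needle at a random position inside the filler."""
    docs = filler[:]
    docs.insert(random.randrange(len(docs)), needle)
    return " ".join(docs)

haystack = build_haystack(latent_needle)

# A keyword baseline answers the exact-match variant but not the latent one:
print("passcode" in build_haystack(exact_needle))  # True: direct match
print("Finland" in haystack)                       # False: the hop must be inferred
```

Needle-in-a-haystack suites that only use the exact-match variant are the ones the comment says models now ace; NoLiMa's point is that the latent variant still degrades sharply with context length.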
vessenes•9mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
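The Graphwalks setup described above can be sketched roughly as follows (the node count, edge count, hash length, and prompt wording are my assumptions; only the mechanism -- hex-named nodes, BFS from a random node, return everything at a given depth -- comes from the announcement):

```python
import hashlib
import random

random.seed(1)

def make_graph(n_nodes=64, n_edges=256):
    """Directed graph whose node names are short hexadecimal hashes."""
    nodes = [hashlib.sha1(str(i).encode()).hexdigest()[:8] for i in range(n_nodes)]
    edges = {v: set() for v in nodes}
    for _ in range(n_edges):
        a, b = random.sample(nodes, 2)
        edges[a].add(b)
    return nodes, edges

def bfs_frontier(edges, root, depth):
    """Gold answer: all nodes first reached at exactly `depth` hops from root."""
    seen, frontier = {root}, {root}
    for _ in range(depth):
        nxt = set()
        for v in frontier:
            nxt |= edges[v] - seen
        seen |= nxt
        frontier = nxt
    return frontier

nodes, edges = make_graph()
root = random.choice(nodes)
gold = bfs_frontier(edges, root, depth=2)

# The prompt is just the edge list plus the question; scoring is
# set-equality against `gold`, so evaluation is fully programmatic.
prompt = "\n".join(f"{a} -> {b}" for a in edges for b in sorted(edges[a]))
prompt += f"\n\nPerform a BFS from {root}. Return every node at depth 2."
```

Because the node names are content-free hashes, the model can't shortcut via semantics; it has to actually attend to the scattered edges.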

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The system is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
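A rough sketch of how such an MRCR-style instance could be generated and scored (the topics match the comment's example, but the one-line templates and counts are placeholders; the real benchmark uses full generated poems and stories):

```python
import random

random.seed(2)
topics = ["tapirs", "bears", "ballerinas"]
forms = ["poem", "story"]

# Interleave many items; each is uniquely addressable by
# (form, topic, ordinal), which is exactly what the question keys on.
items = []
for form in forms:
    for topic in topics:
        for i in range(50):
            items.append((form, topic, f"A {form} about {topic}, number {i}."))
random.shuffle(items)
context = "\n".join(text for _, _, text in items)

def gold_answer(form, topic, ordinal):
    """The ordinal-th item of the given form/topic, in document order."""
    matches = [t for f, tp, t in items if f == form and tp == topic]
    return matches[ordinal - 1]

# "Give me the third poem about tapirs" -> scored by exact string match
# against the quote the model returns.
answer = gold_answer("poem", "tapirs", 3)
```

Since the gold answer is a verbatim quote, grading is a string comparison, which is what makes the benchmark cheap to generate and evaluate at scale.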

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/