
Show HN: MailScrub – terminal UI for bulk Gmail unsubscribing

https://github.com/brooksc/MailScrub
1•brooksc•5m ago•0 comments

Search, Experience, Credence – classification of resources

https://en.wikipedia.org/wiki/SEC_classification_of_goods_and_services
1•downboots•8m ago•0 comments

Roast my game: Photobomb mobile multiplayer party game

https://www.photobomb.online/
1•alhwyn•11m ago•1 comment

God's View – Realtime BGP Looking Glass and IP Lookup

https://god.ad/
1•tgma•11m ago•0 comments

A classic Excel ad just got a 2026 upgrade [video]

https://www.youtube.com/watch?v=iEVx2ylAbI4
1•xtrkil•13m ago•0 comments

Meta will record employees' keystrokes and use it to train its AI models

https://techcrunch.com/2026/04/21/meta-will-record-employees-keystrokes-and-use-it-to-train-its-a...
1•jomon003•14m ago•0 comments

Article

https://mag.openrockets.com/p/developmental-integrity-di-and-the-cognitive-environments-why-minor...
2•openrockets•14m ago•0 comments

Show HN: Real-Real-Time Chat

https://kraa.io/kraa/trees
1•levmiseri•17m ago•0 comments

Building agents that reach production systems with MCP

https://claude.com/blog/building-agents-that-reach-production-systems-with-mcp
1•armcat•26m ago•0 comments

Anthropic: No "kill switch" for AI in classified settings

https://www.axios.com/2026/04/22/anthropic-no-kill-switch-ai-classified-settings
2•dsavant•27m ago•1 comment

America's descent into state capitalism is exaggerated

https://www.economist.com/business/2026/04/22/americas-descent-into-state-capitalism-is-exaggerated
2•andsoitis•32m ago•1 comment

It's time to reclaim the word "Palantir" for JRR Tolkien

https://www.zig.art/p/its-time-to-reclaim-the-word-palantir
2•IdahoSpring•34m ago•2 comments

Google upgrades AI Mode in the Chrome browser

https://blog.google/products-and-platforms/products/search/ai-mode-chrome/
1•gmays•34m ago•0 comments

Why This Car Rental Company's Stock Climbed 700% in One Month

https://www.forbes.com/sites/aliciapark/2026/04/22/a-car-rental-stock-is-up-700-in-one-month-is-i...
3•paulpauper•37m ago•0 comments

Congress pushes new semiconductor export control law

https://www.tomshardware.com/tech-industry/semiconductors/congress-moves-to-strip-commerce-of-chi...
2•jackyli02•41m ago•0 comments

Bash-ships: A Bash implementation of the classic strategy game Battleships

https://github.com/StarShovel/bash-ships
1•thunderbong•48m ago•0 comments

Show HN: Better-skills – Agent skill manager with profiles and versioning

https://github.com/ocherry341/better-skills
1•ocherry6622•49m ago•0 comments

Tasteful Tokenmaxxing

https://www.latent.space/p/ainews-tasteful-tokenmaxxing
2•omer_k•54m ago•0 comments

Arti: a Rust Tor Implementation – no longer experimental and ready for use

https://arti.torproject.org
2•acheong08•58m ago•0 comments

Why Iran Metabolizes the Pressure That Broke Venezuela

https://warontherocks.com/why-iran-metabolizes-the-pressure-that-broke-venezuela/
1•KnuthIsGod•1h ago•0 comments

Orinoco: Young Generation Garbage Collection

https://v8.dev/blog/orinoco-parallel-scavenger
2•plow-tycoon•1h ago•0 comments

Rspack 2.0

https://rspack.rs/blog/announcing-2-0
1•bpierre•1h ago•0 comments

Linux may get a hall pass from one state age bill, Congress plays hall monitor

https://www.theregister.com/2026/04/22/linux_us_state_age_verificaiton_laws/
1•Bender•1h ago•0 comments

Lisp Chat: An anonymous chat IRC-like written in Common Lisp

https://github.com/ryukinix/lisp-chat
1•lerax•1h ago•1 comment

OCUDU ecosystem foundation to accelerate open source AI-RAN innovation

https://www.linuxfoundation.org/press/linux-foundation-announces-ocudu-ecosystem-foundation-to-ac...
1•teleforce•1h ago•0 comments

Iran claims US used backdoors to knock out networking equipment during war

https://www.theregister.com/2026/04/21/iran_claims_us_used_backdoors/
1•Bender•1h ago•1 comment

A Practical Introduction to Constraint Programming Using CP-SAT and Python

https://pganalyze.com/blog/a-practical-introduction-to-constraint-programming-using-cp-sat
1•acheong08•1h ago•0 comments

Show HN: Cartoon Studio – an open-source desktop app for making 2D cartoon shows

https://github.com/Jellypod-Inc/cartoon-studio
3•bilater•1h ago•0 comments

Amazon is regretting AI [video][8 mins]

https://www.youtube.com/watch?v=0vvVo0Um1HY
2•Bender•1h ago•0 comments

Starbucks expansion in Nashville brews bitterness in Seattle

https://www.seattletimes.com/business/starbucks/starbucks-expansion-in-nashville-brews-bitterness...
1•RickJWagner•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested only on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
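That Graphwalks recipe is easy to reproduce in miniature. Below is a minimal sketch (not OpenAI's actual generator; node count, edge count, and hash length are arbitrary assumptions) of how one could build such a task and compute its ground-truth answer:

```python
import hashlib
import random
from collections import deque

def make_graphwalks_prompt(n_nodes=16, n_edges=40, depth=2, seed=0):
    """Build a random directed graph whose node names are hex hashes,
    then compute the ground truth: all nodes exactly `depth` BFS steps
    from a randomly chosen start node."""
    rng = random.Random(seed)
    nodes = [hashlib.sha1(str(i).encode()).hexdigest()[:8] for i in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    # Adjacency list for BFS
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)

    start = rng.choice(nodes)

    # Standard BFS, recording the depth at which each node is first reached
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        for nxt in adj[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)

    answer = sorted(n for n, d in dist.items() if d == depth)
    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    prompt += f"\n\nPerform BFS from {start}. List all nodes at depth {depth}."
    return prompt, answer
```

Because the edge list fills the window but the answer is computed programmatically, grading is exact, and the model can't succeed by retrieving a single needle -- it has to attend across the whole graph.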

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas are generated alongside stories about tapirs, bears, and ballerinas, perhaps fifty of each. The model is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
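The same idea can be sketched as a tiny generator. This is an illustrative toy, not OpenAI's implementation; the topics, item counts, and filler text are all made up:

```python
import random

def make_mrcr_example(topics=("tapirs", "bears", "ballerinas"),
                      per_kind=5, seed=0):
    """Interleave generated poems and stories about the same topics,
    then ask for the k-th poem (in document order) about one topic.
    The ground truth is a direct quote of that exact item."""
    rng = random.Random(seed)
    items = []
    for kind in ("poem", "story"):
        for topic in topics:
            for _ in range(per_kind):
                items.append((kind, topic,
                              f"A {kind} about {topic}: lorem ipsum #{rng.randrange(10**6)}"))
    rng.shuffle(items)  # the model must track ordering within each (kind, topic)

    # Target: the k-th poem about a random topic, counted in document order
    topic = rng.choice(topics)
    k = rng.randrange(1, per_kind + 1)
    matches = [text for kind, t, text in items if kind == "poem" and t == topic]
    context = "\n\n".join(text for _, _, text in items)
    question = f"Give me poem number {k} about {topic}, quoted exactly."
    return context, question, matches[k - 1]
```

Scoring is again trivial and exact -- string-match the model's quote against the ground-truth item -- which is what makes this kind of programmatic generation attractive.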

They only test their own models on MRCR in the benchmark graphs, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/