frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Can memory-hard PoW still meaningfully reduce ASIC/GPU advantage?

https://pastebin.support.one/view/aba95c0b
1•TheBlocksmith•1m ago•0 comments

Drone Swarms Packed into Unassuming Containers Sought by DARPA

https://www.twz.com/news-features/drone-swarms-packed-into-unassuming-containers-sought-by-darpa
1•breve•1m ago•0 comments

Yarbo's promise to fix the robot mower that ran me over

https://www.theverge.com/tech/926989/yarbo-robot-lawn-mower-hack-company-update-security-promise
2•gnabgib•2m ago•0 comments

Getting Arrested in Japan

https://sundaicity.com/blogs/getting-arrested-in-japan
1•bane•3m ago•0 comments

Show HN: Pitch Is Just Rhythm Sped Up [video]

https://www.youtube.com/watch?v=q9bFUocrm70
1•ersinesen•5m ago•0 comments

Matt Pietrek

https://en.wikipedia.org/wiki/Matt_Pietrek
1•stefan_•5m ago•0 comments

The Death of the Roadmap

https://debarshibasak.github.io/readables/blogs/death-of-roadmap.html
2•debarshri•7m ago•0 comments

Keats, Letters

https://sites.ualberta.ca/~dmiall/Tintern07/KeatsLet.htm
1•highfrequency•10m ago•0 comments

Rust but Lisp

https://github.com/ThatXliner/rust-but-lisp
2•thatxliner•13m ago•2 comments

Qwench is a terminal typing game for Linux, Windows, Mac. Built with Crossterm.

https://github.com/BitPusher16/qwench
1•carodgers•13m ago•1 comments

War.gov/UFO/ UFO file download reference repo

https://github.com/dopper/nts-ufos
1•dopper•15m ago•0 comments

London's BT Tower to get rooftop swimming pool

https://www.theregister.com/offbeat/2026/05/09/londons-bt-tower-to-get-rooftop-swimming-pool/5237337
1•samizdis•16m ago•0 comments

The 90 Day disclosure policy is dead

https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/
2•unknownhad•16m ago•0 comments

Blog Post Tells the Time

https://alexsci.com/blog/this-blog-post-tells-the-time/
1•saeedesmaili•20m ago•0 comments

Show HN: Free OSS transcription app I made and found it's faster than wispr flow

https://mumbli.app/
2•fireharp•22m ago•0 comments

The Rise of Emotional Surveillance

https://www.theatlantic.com/culture/2026/05/worker-surveillance-emotion-ai/687029/
4•iugtmkbdfil834•26m ago•1 comments

Web Server on a Nintendo Wii

http://wii.sjmulder.nl/
1•adunk•27m ago•0 comments

Hugging Face's Clem Delangue: Stop Comparing Engines to Cars

https://www.turingpost.com/p/clem-delangue-hugging-face-ai-builders
1•gmays•27m ago•0 comments

Japan is deploying ultra-cheap cardboard drones built for swarm warfare

https://www.tomshardware.com/tech-industry/japan-is-deploying-ultra-cheap-cardboard-drones-built-...
2•_____k•28m ago•1 comments

Geography Is Four-Dimensional

https://sive.rs/4d
1•ColinWright•35m ago•0 comments

Feedback on my local-first AI assistant project?

https://github.com/joshuatic/voxel
1•joshuatic•38m ago•1 comments

Lies, damned lies, and Elastic's benchmarks

https://www.gouthamve.dev/lies-damned-lies-and-elastics-benchmarks/
1•gouthamve•44m ago•0 comments

A hacker ran me over with a robot lawn mower

https://www.theverge.com/tech/925696/yarbo-robot-lawn-mower-hack-remote-control-camera-access-mqtt
2•gnabgib•45m ago•0 comments

Does it scale? Who cares (2011)

https://jacquesmattheij.com/does-it-scale-who-cares/
1•downbad_•48m ago•1 comments

IRGC to generate revenue from undersea internet cables in Strait of Hormuz

https://twitter.com/IranIntl_En/status/2053206979330392414
1•us321•49m ago•0 comments

Trump Media and Technology Group lost $406M in first three months of 2026

https://www.theguardian.com/us-news/2026/may/09/trump-media-and-technology-group-loses-406m-first...
4•vinni2•51m ago•2 comments

An Excerpt from "Go the Fuck to College" by Adam Mansbach

https://www.fatherly.com/parenting/go-the-fck-to-college-essay-adam-mansbach
1•johntfella•55m ago•1 comments

Consumer AI's ARPU Problem

https://twitter.com/SashaKaletsky/status/2051366803897766236
1•gmays•55m ago•0 comments

Can I Copyright a Song I Made with AI?

https://www.musicologize.com/can-i-copyright-a-song-i-made-with-ai/
2•speckx•58m ago•1 comments

ScalaTimes – A Free, Once-Weekly Scala News Flash

https://scalatimes.com
1•TheWiggles•58m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•1y ago

Comments

kzawpl•1y ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
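The distinction NoLiMa draws, between exact-match retrieval and retrieval that needs a latent association, can be illustrated with a toy generator. Everything below (the association pairs, the function name, the wording) is my own sketch of the idea, not the benchmark's actual code or data:

```python
import random

# Each needle shares no keywords with its question: answering requires a
# one-hop world-knowledge inference (e.g. the Semper Opera House is in
# Dresden), not string matching. Facts here are illustrative only.
ASSOCIATIONS = [
    ("Yuki lives next to the Semper Opera House.",
     "Which character has been to Dresden?", "Yuki"),
    ("Ben once climbed the Eiffel Tower.",
     "Which character has visited Paris?", "Ben"),
]

def make_nolima_item(filler_sentences, seed=0):
    """Embed one association needle at a random position in filler text
    and return (haystack, question, expected answer)."""
    rng = random.Random(seed)
    needle, question, answer = rng.choice(ASSOCIATIONS)
    pos = rng.randrange(len(filler_sentences) + 1)
    haystack = filler_sentences[:pos] + [needle] + filler_sentences[pos:]
    return " ".join(haystack), question, answer
```

An exact-match needle ("Yuki has been to Dresden.") would make the same item trivial for current models; the association form is what exposes the drop-off.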
vessenes•1y ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models: there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this: a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done nearly perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
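The Graphwalks setup described above can be sketched roughly like this; the node label format, graph sizes, and prompt wording are my guesses at the shape of the task, not OpenAI's implementation:

```python
import random
from collections import deque

def make_graphwalks_item(n_nodes=64, n_edges=128, depth=2, seed=0):
    """Generate a Graphwalks-style prompt: a random directed graph over
    hex-hash node labels, plus the ground-truth answer (all nodes at
    exactly `depth` BFS steps from a random start node)."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, set()).add(dst)

    start = rng.choice(nodes)
    # Standard BFS to record the shortest distance to every reachable node.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    answer = sorted(v for v, d in dist.items() if d == depth)

    prompt = ("Edges:\n" + "\n".join(f"{s} -> {d}" for s, d in edges) +
              f"\nList every node exactly {depth} BFS steps from {start}.")
    return prompt, answer
```

Because the ground truth is computed programmatically, grading is a set comparison, and the edge list can be padded to fill any context window.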

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
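A rough sketch of an MRCR-style generator follows, with the same caveat: the piece format, counts, and query wording are illustrative assumptions, not OpenAI's code:

```python
import random

def make_mrcr_item(topics=("tapirs", "bears", "ballerinas"),
                   forms=("poem", "story"), per_combo=5, seed=0):
    """Generate an MRCR-style item: many near-duplicate pieces interleaved
    in one context, plus a query for the k-th piece of one (form, topic)
    combination and its ground-truth quote."""
    rng = random.Random(seed)
    pieces = []
    for form in forms:
        for topic in topics:
            for _ in range(per_combo):
                # Unique suffix so each piece is a distinguishable quote.
                pieces.append((form, topic,
                               f"A {form} about {topic}, #{rng.getrandbits(24):06x}."))
    rng.shuffle(pieces)  # interleave: the model must track order per combination

    form = rng.choice(forms)
    topic = rng.choice(topics)
    k = rng.randrange(1, per_combo + 1)
    # Ground truth: the k-th matching piece in document order.
    matches = [text for f, t, text in pieces if f == form and t == topic]
    question = f"Give me the {k}-th {form} about {topic}, quoted exactly."
    context = "\n\n".join(text for _, _, text in pieces)
    return context, question, matches[k - 1]
```

Grading is again mechanical (exact string match against the gold quote), which is what makes the benchmark easy to evaluate at scale.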

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/