frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Mushrooms as Rainmakers: How Spores Act as Nuclei for Raindrops

https://pmc.ncbi.nlm.nih.gov/articles/PMC4624964/
1•andsoitis•3m ago•0 comments

Waymo fleet halts in San Francisco during power outages

https://twitter.com/breaking911/status/2002568542835876194
1•defly•5m ago•0 comments

Right to Know Request Related to ALPR in Harrisburg Pa

http://citizenscounterintelligence.com/
1•pcgeller•16m ago•1 comments

Autonomous penetration-testing copilot that orchestrates 20 Kali-grade tools

https://github.com/sirspyr0/security-ai-agent-public
2•sirspyr0•22m ago•0 comments

Show HN: SafeShare Pro – a local-first URL cleaner (removes tracking params)

https://j-ai-71.github.io/Supersystem/
1•safeshare•24m ago•0 comments

The Texas Instruments CC-40 invades Gopherspace (plus TI-74 BASICALC)

http://oldvcr.blogspot.com/2025/12/the-texas-instruments-cc-40-invades.html
3•todsacerdoti•26m ago•0 comments

Google killed the 25-year-old Sega Dreamcast PlanetWeb 3.0 web browser this week

https://www.tomshardware.com/video-games/retro-gaming/the-sega-dreamcasts-planetweb-3-0-browser-w...
4•mmcclure•27m ago•1 comments

Python Software Foundation, National Science Foundation, and Integrity

https://harihareswara.net/posts/2025/python-software-foundation-national-science-foundation-and-i...
1•lumpa•28m ago•0 comments

Show HN: RentViz, vibe-coded single-SVG rental income visualization

https://github.com/Ericson2314/rentviz
1•Ericson2314•31m ago•0 comments

Measuring AI Ability to Complete Long Tasks: Opus 4.5 has 50% horizon of 4h49m

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
24•spicypete•32m ago•2 comments

Show HN: I vibe-coded a working macOS driver for an obsolete laser engraver

https://github.com/leftouterjoins/EpilogDriver
3•earsayapp•34m ago•0 comments

Humanity May Reach Singularity Within Just 4 Years, Trend Shows

https://www.popularmechanics.com/technology/robots/a69820101/when-the-singularity-will-happen-tre...
3•indigodaddy•36m ago•1 comments

Keriacord

https://blog.cloudflare.com/welcome-to-connectivity-cloud/
1•keriacord•38m ago•0 comments

Web 4: The AI-Native Web

https://webfourisnow.com/
2•cboulio•44m ago•0 comments

Apple allows third party app stores in Japan

https://www.apple.com/newsroom/2025/12/apple-announces-changes-to-ios-in-japan/
1•smugma•57m ago•2 comments

AI News from Hacker News

https://ai-hn.com
2•buluzhai•1h ago•0 comments

PG&E outages in S.F. leave 130k without electricity

https://www.sfchronicle.com/sf/article/pg-e-outage-40-000-customers-without-power-21254326.php
13•hamandcheese•1h ago•1 comments

Making the Case for a World Wild Web

https://aneeshsathe.com/2025/12/19/logic-of-the-thicket-and-the-unsearchable-web/
2•boredgargoyle•1h ago•0 comments

Social Media Website

https://socialmediaapp-ffcdechnhffxbtfd.canadacentral-01.azurewebsites.net/Account/Login?ReturnUr...
1•dblanke•1h ago•0 comments

EDF estimates EPR2 programme cost at EUR72.8B

https://www.world-nuclear-news.org/articles/edf-estimates-epr2-programme-costs-at-eur728-billion
1•chickenbig•1h ago•0 comments

Distributional AGI Safety (DeepMind)

https://arxiv.org/abs/2512.16856
4•dcre•1h ago•0 comments

Epstein Files Photos Disappear from Government Website, Including One of Trump

https://www.nytimes.com/2025/12/20/us/politics/trump-epstein-files-government-website.html
13•JumpCrisscross•1h ago•2 comments

William Golding's Island of Savagery

https://www.historytoday.com/archive/portrait-author-historian/william-goldings-island-savagery
1•samclemens•1h ago•1 comments

Faster Practical Modular Inversion

https://purplesyringa.moe/blog/faster-practical-modular-inversion/
1•todsacerdoti•1h ago•0 comments

Americans are hungry for community. So why don't we have European-style squares?

https://www.cnn.com/2025/12/19/travel/europe-public-squares-american-development
5•rawgabbit•1h ago•6 comments

Speech to text model for healthcare-based voice applications

https://huggingface.co/google/medasr
2•tzury•1h ago•0 comments

Show HN: Spelling Bee Trainer

https://spell2.netlify.app/
1•rahimnathwani•1h ago•0 comments

Show HN: SHA-256 quasi-collision with 184/256 matching bits

https://github.com/POlLLOGAMER/SHA-256-Colision-Finder-NEW.ipynb/blob/main/Base_Version/SHA_256_C...
1•KaoruAK•1h ago•0 comments

Google sues web scraper for sucking up search results 'at an astonishing scale'

https://www.theverge.com/news/848365/google-scraper-lawsuit-serpapi
4•tzury•1h ago•1 comments

What Is Threads Trends Tool?

1•Tech_News_Daily•1h ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
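To make the distinction concrete, here is a minimal sketch of a NoLiMa-style probe (the needle/question pair is an illustrative example in the spirit of the benchmark, not taken from its actual data): the question shares no keywords with the needle, so answering requires a latent association (Semper Opera House → Dresden) rather than literal string matching.

```python
import random

def make_nolima_style_probe(filler_sentences, seed=0):
    """Toy NoLiMa-style probe: the needle and the question share no
    overlapping keywords, so answering requires a one-hop association
    (the Semper Opera House is in Dresden), not exact text search."""
    rng = random.Random(seed)
    needle = "Actually, Yuki lives next to the Semper Opera House."
    haystack = list(filler_sentences)
    # Bury the needle at a random position in the filler text.
    haystack.insert(rng.randrange(len(haystack) + 1), needle)
    question = "Which character has been to Dresden?"
    return " ".join(haystack), question, "Yuki"
```

A classic needle-in-a-haystack test would instead ask "Who lives next to the Semper Opera House?", which a model can answer by locating the matching string; this variant cannot be solved that way.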
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. Once training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
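The generation scheme described above can be sketched in a few lines (this is an illustrative reconstruction from the announcement's description, not OpenAI's actual harness; node count, edge count, and depth are made-up parameters): build a directed graph over hex-hash node names, then compute the gold answer with an ordinary BFS.

```python
import hashlib
import random
from collections import deque

def make_graphwalks_task(n_nodes=12, n_edges=30, depth=2, seed=0):
    """Toy Graphwalks-style task: a directed graph over hexadecimal
    node names, plus the gold set of nodes first reached exactly
    `depth` BFS steps from a random start node."""
    rng = random.Random(seed)
    # Hex "hashes" as node names, like the benchmark's hexadecimal hashes.
    nodes = [hashlib.sha256(str(i).encode()).hexdigest()[:8]
             for i in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)

    start = rng.choice(nodes)
    # Standard BFS, recording the depth at which each node is first reached.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    gold = sorted(n for n, d in dist.items() if d == depth)

    prompt = "\n".join(f"{u} -> {v}" for u, v in edges)
    prompt += f"\n\nStarting from {start}, list all nodes exactly {depth} BFS steps away."
    return prompt, gold
```

Because the gold answer is computed mechanically, grading a model's response reduces to set comparison, which is what makes this kind of task cheap to generate and evaluate at any context length.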

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty each. The model is then asked, "Give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
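A minimal sketch of that setup (an illustrative reconstruction, not OpenAI's actual generator; the real benchmark uses model-written poems and stories rather than the placeholder text here): interleave many items per (kind, topic) pair, then ask for the k-th item of one pair in order of appearance.

```python
import random

def make_mrcr_task(topics=("tapirs", "bears", "ballerinas"),
                   kinds=("poem", "story"), per_pair=5,
                   target_kind="poem", target_topic="tapirs", k=3, seed=0):
    """Toy MRCR-style task: many items per (kind, topic) pair are
    shuffled together; the question asks for the k-th item of one
    kind/topic, and the gold answer is that item's exact text."""
    rng = random.Random(seed)
    items = []
    for kind in kinds:
        for topic in topics:
            for i in range(per_pair):
                # Placeholder text; the real benchmark generates real items.
                items.append((kind, topic, f"{kind} {i + 1} about {topic}: ..."))
    rng.shuffle(items)

    context = "\n".join(text for _, _, text in items)
    question = (f"Quote {target_kind} number {k} about {target_topic}, "
                f"counting in order of appearance.")
    # Gold: the k-th matching item in document (post-shuffle) order.
    gold = [text for kind, topic, text in items
            if kind == target_kind and topic == target_topic][k - 1]
    return context, question, gold
```

As with Graphwalks, the gold answer is an exact string computed by the generator, so evaluation is a trivial comparison while the task still forces the model to track and count semantically similar distractors across the whole context.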

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/