
Data breach at credit check giant 700Credit affects at least 5.6M

https://techcrunch.com/2025/12/12/data-breach-at-credit-check-giant-700credit-affects-at-least-5-...
1•mfiguiere•20s ago•0 comments

How do you deal with NFS auth in your homelab? Device access?

https://old.reddit.com/r/homelab/comments/1pb28gs/how_do_you_deal_with_nfs_auth_in_your_homelab/
1•sipofwater•54s ago•0 comments

From Per-Content to Monthly Support

https://www.davidrevoy.com/article1106/from-per-content-to-monthly-support
1•Tomte•1m ago•0 comments

Everyone Is Gambling and No One Is Happy

https://kyla.substack.com/p/everyone-is-gambling-and-no-one-is
1•codneprose•2m ago•0 comments

Expanded Screening and Vetting for H-1B and Dependent H-4 Visa Applicants

https://www.state.gov/content/travel/en/News/visas-news/announcement-of-expanded-screening-and-ve...
1•e12e•4m ago•1 comments

Your phone is a fake house

https://etymology.substack.com/p/your-phone-is-a-fake-house
2•rpgbr•5m ago•0 comments

Bringing Gemini translation capabilities to Google Translate

https://blog.google/products/search/gemini-capabilities-translation-upgrades/
1•xnx•5m ago•1 comments

Nuclear energy key to decarbonising Europe, says EESC

https://www.eesc.europa.eu/en/news-media/news/nuclear-energy-key-decarbonising-europe-says-eesc
2•mpweiher•6m ago•0 comments

Memory Price Surge to Persist in 1Q26; Brands Raising Prices, Downgrading Specs

https://www.techpowerup.com/343954/memory-price-surge-to-persist-in-1q26-smartphone-and-notebook-...
1•akyuu•8m ago•0 comments

My GPT-5.2 Review: Impressive, but Too Slow

https://shumer.dev/gpt52review
1•bko•10m ago•0 comments

The Green Party Gender Ideology Crisis

https://rodgercuddington.substack.com/p/the-green-party-gender-ideology-crisis
3•freespirt•10m ago•1 comments

Mapping escalating US pressure on Venezuela

https://www.reuters.com/graphics/USA-VENEZUELA/MAPS/lgvdqxnenpo/
2•giuliomagnifico•11m ago•0 comments

Show HN: Built a free USCIS form-filling tool (no Adobe required)

https://fillvisa.com/demo/
1•junaid_97•11m ago•0 comments

'Three norths' set to leave England for hundreds of years

https://www.ordnancesurvey.co.uk/news/three-norths-departing-england
2•ColinWright•12m ago•0 comments

Invisible Internet Protocol: Network without borders

https://i2pd.website/
2•poly2it•15m ago•0 comments

Claude in a Box

https://blog.parcha.dev/claude-in-a-box
4•miguelrios•15m ago•0 comments

'Architects of AI' Named Time Magazine's Person of the Year

https://www.bbc.com/news/articles/cly01mdm577o
1•internetguy•16m ago•1 comments

US to mandate AI vendors measure political bias for federal sales

https://www.reuters.com/world/us/us-mandate-ai-vendors-measure-political-bias-federal-sales-2025-...
2•beedeebeedee•17m ago•2 comments

Deciduous: Work better LLMs: use a decisions DAG and insight tools/querying

https://notactuallytreyanastasio.github.io/deciduous/
2•rhgraysonii•18m ago•1 comments

Thailand seizes millions in assets and issues arrest warrants in scams crackdown

https://www.japantimes.co.jp/news/2025/12/04/asia-pacific/crime-legal/thailand-assets-arrest-scams/
1•PaulHoule•19m ago•0 comments

Anthropic donated MCP to the Linux Foundation. What does that mean?

https://mbsamuel.substack.com/p/anthropic-donated-mcp-to-the-linux
1•waprin•21m ago•1 comments

Blocking Software Supply Chain Attacks

https://softwareengineeringdaily.com/2025/12/09/blocking-software-supply-chain-attacks-with-feros...
1•feross•24m ago•0 comments

Show HN: Roadhog – roadmaps, open issues and release notes with PostHog built in

https://www.roadhog.app/
1•chrisbigelow•24m ago•0 comments

Data sonification: a curious oddity that may have some uses

https://blog.engora.com/2025/12/data-sonification-curious-oddity-that.html
1•Vermin2000•26m ago•1 comments

An experimental, private, autonomous todo list

https://andybromberg.com/autonomous-todo-list
3•state•27m ago•0 comments

Show HN: tomcp.org – Turn any URL into an MCP server

https://github.com/Ami3466/tomcp
3•ami3466•28m ago•1 comments

Wealthfront becomes a publicly traded company

https://www.wealthfront.com/blog/wealthfront-ipo-public-company/
1•onnnon•30m ago•0 comments

Someone from Nvidia uploaded the parent folder of their upcoming model

https://twitter.com/xeophon_/status/1999394570967089630
3•thunderbong•32m ago•0 comments

The Methuselah Worm

https://news.wm.edu/2025/12/09/the-methuselah-worm/
2•geox•32m ago•0 comments

Claude Code systematically creates issues in public anthropics/claude-code repo

https://github.com/anthropics/claude-code/issues/13797
2•TheTaytay•36m ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
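The Graphwalks recipe described above is easy to sketch: generate random hex-hash nodes and edges, dump the edge list into the prompt, and compute the gold BFS answer yourself so the model's response is trivially checkable. This is an illustrative toy, not OpenAI's actual harness; all function names and parameters here are my own assumptions:

```python
import random

def make_graphwalks_instance(n_nodes=200, n_edges=400, depth=2, seed=0):
    """Toy Graphwalks-style instance: a random directed graph of hex-hash
    node labels, a BFS question, and the gold answer set."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    # Adjacency list, used only to compute the gold answer.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)

    # BFS level by level: after k iterations, `frontier` holds the nodes
    # whose shortest distance from `start` is exactly k.
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, []) if d not in seen}
        seen |= frontier

    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    question = f"Do a BFS from {start}; list all nodes exactly {depth} hops away."
    return prompt, question, frontier
```

The point of the design is that generation is cheap at any context length (just add edges) and evaluation is an exact set comparison, so no judge model is needed.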

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
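An MRCR-style instance can be sketched the same way: generate many items per (kind, topic) pair, shuffle them into one long context, then ask for the Nth poem about some topic, with the gold answer being an exact quote. Again a toy under my own assumptions (the real benchmark uses model-written poems and stories, not placeholder strings):

```python
import random

def make_mrcr_instance(topics=("tapirs", "bears", "ballerinas"),
                       per_kind=5, seed=0):
    """Toy MRCR-style instance: shuffled poems/stories per topic, plus a
    'quote the Nth poem about X' question with an exact-match gold answer."""
    rng = random.Random(seed)
    items = []
    for kind in ("poem", "story"):
        for topic in topics:
            for i in range(per_kind):
                # Placeholder text stands in for a generated poem or story.
                items.append((kind, topic,
                              f"{kind} #{i} about {topic} (id={rng.getrandbits(32):08x})"))
    rng.shuffle(items)

    context = "\n\n".join(text for _, _, text in items)

    # Target the Nth poem about a random topic, counted in context order.
    topic = rng.choice(topics)
    n = rng.randrange(1, per_kind + 1)
    matches = [text for kind, t, text in items if kind == "poem" and t == topic]
    question = f"Quote the poem about {topic} that appears {n}th in the text."
    gold = matches[n - 1]
    return context, question, gold
```

Because the gold answer is a verbatim substring of the context, scoring reduces to string matching, while answering correctly forces the model to count and to separate poems from stories about the same topic.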

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/