Show HN: Codesprint – A typing game for practicing coding interview syntax

https://github.com/cwklurks/codesprint
1•cwkcwk•1m ago•0 comments

Wall Street Races to Cut Its Risk from AI's Borrowing Binge

https://finance.yahoo.com/news/wall-street-races-cut-risk-113000304.html
1•thewebguyd•5m ago•0 comments

Claude Code made $1B in 6 months – my AI-coded iPhone app shows why

https://www.zdnet.com/article/claude-code-made-an-astonishing-1b-in-6-months-and-my-own-ai-coded-...
1•dxs•5m ago•0 comments

Releasebot – Every Release Note and Changelog in One Place

https://releasebot.io/
1•ArmageddonIt•5m ago•0 comments

First Dates with Mr. Meeseeks

https://backnotprop.substack.com/p/50-first-dates-with-mr-meeseeks
1•ramoz•8m ago•0 comments

Show HN: SerpApi MCP Server

https://github.com/serpapi/serpapi-mcp
4•thefoolofdaath•8m ago•0 comments

We Built Lightpanda in Zig

https://lightpanda.io/blog/posts/why-we-built-lightpanda-in-zig
2•ashvardanian•9m ago•0 comments

Keep Effects at the Edges

https://agentultra.com/blog/keep-effects-at-the-edges/
1•vitalnodo•9m ago•0 comments

Norway: Ruter Examines Cybersecurity Risks in Chinese Electric Buses

https://news.busworld.org/article/302123/norway-ruter-examines-cybersecurity-risks-in-chinese-ele...
1•gscott•10m ago•0 comments

Hepatitis B vaccine guidance set to be rolled back for US babies

https://www.nature.com/articles/d41586-025-03937-1
4•Amorymeltzer•11m ago•0 comments

Bitbucket self-hosted runner will cost $15/month

https://www.atlassian.com/blog/bitbucket/announcing-v5-self-hosted-runners
2•tcptomato•13m ago•0 comments

Wall Street races to protect itself from AI bubble

https://rollingout.com/2025/12/05/wall-street-protects-itself-ai-bubble/
7•zerosizedweasle•17m ago•0 comments

Software Taboos

http://rebuildworld.net/taboo/
3•pg83•17m ago•1 comment

Agents Training Agents: A practical architecture for autonomous self-improvement

https://techlife.blog/posts/agents-training-agents-a-practical-architecture-for-autonomous-self-i...
2•tsenturk•17m ago•1 comment

The Patient Is Not a Document: Moving from LLMs to a World Model for Oncology

https://blog.standardmodel.bio/p/the-patient-is-not-a-document-moving
3•kevinalexbrown•19m ago•0 comments

2025.49: Conflicts, Consternation, and Code Red

https://stratechery.com/2025/conflicts-consternation-and-code-red/
1•feross•21m ago•0 comments

Apple's Return to Intel Rumored to Extend to iPhone

https://www.macrumors.com/2025/12/05/intel-iphone-chips-rumor/
1•tosh•22m ago•0 comments

50 Years of Proof Assistants

https://lawrencecpaulson.github.io//2025/12/05/History_of_Proof_Assistants.html
2•baruchel•26m ago•0 comments

Show HN: Heart rate with phone camera (plain HTML/JS)

https://github.com/SMUsamaShah/heart-rate
1•smusamashah•26m ago•0 comments

JavaScript Engines Zoo

https://zoo.js.org/
4•gurgunday•26m ago•0 comments

MongoDB Earnings Call Might Have Topped the AI Trade

https://knowtrend.ai/blog/mongodb-postgres
1•codevs•27m ago•0 comments

3D in CSS (No JavaScript)

https://codepen.io/Cubiq-ish/pen/myVNNoe
1•qingcharles•29m ago•0 comments

Are large language models worth it?

https://nicholas.carlini.com/writing/2025/are-llms-worth-it.html
1•PaulHoule•29m ago•0 comments

WikiFlix: Full Movies Hosted on Wikimedia Commons

https://commons.wikimedia.org/wiki/User:Spinster/WikiFlix
11•netule•30m ago•1 comment

A full-body MRI can reveal hidden killers. Do we want to know?

https://www.washingtonpost.com/health/2025/12/05/full-body-mri-scan-experience/
3•pseudolus•30m ago•2 comments

The Resonant Computing Manifesto

https://simonwillison.net/2025/Dec/5/resonant-computing/
1•mooreds•31m ago•2 comments

We invested 10% to pay back tech debt; here's what happened (2023)

https://blog.alexewerlof.com/p/tech-debt-day
1•mooreds•33m ago•0 comments

Global Depression Is Coming Sooner Than Expected [video]

https://www.youtube.com/watch?v=UBMluINRans
2•mooreds•35m ago•0 comments

Show HN: I won Half Baked x Bolt Hackathon (20k participants) with Claim Watch

https://claim.watch/
1•ma1or•35m ago•0 comments

Malicious Crate Mimicking 'Finch' Exfiltrates Credentials via a Hidden

https://socket.dev/blog/malicious-crate-mimicking-finch-exfiltrates-credentials
1•feross•36m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are usually tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
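To make the distinction concrete, here is a minimal sketch of the two kinds of probe. The filler text and the build_prompt helper are invented for illustration; the one-hop pair is modeled on the kind of example the NoLiMa paper gives, not quoted from the benchmark:

    import random

    # Filler prose used to pad the context out to the target length.
    FILLER = "The rain fell steadily on the quiet town that evening. " * 40

    def build_prompt(needle: str, question: str, copies: int = 50, seed: int = 0) -> str:
        """Bury the needle at a random position among filler paragraphs."""
        rng = random.Random(seed)
        paragraphs = [FILLER] * copies
        paragraphs.insert(rng.randrange(len(paragraphs) + 1), needle)
        return "\n\n".join(paragraphs) + f"\n\nQuestion: {question}\nAnswer:"

    # Literal retrieval: the question's words appear verbatim next to the answer.
    literal = build_prompt(
        needle="The secret code is 7421.",
        question="What is the secret code?",
    )

    # NoLiMa-style probe: no lexical overlap between question and needle; the
    # model must bridge a latent fact (the Semper Opera House is in Dresden).
    one_hop = build_prompt(
        needle="Actually, Yuki lives next to the Semper Opera House.",
        question="Which character has been to Dresden?",
    )

Exact-match search solves the first prompt at almost any context length; it is the second kind that degrades sharply as the haystack grows.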
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
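That one-paragraph description is enough to reconstruct the shape of the task. Here is a minimal sketch of a Graphwalks-style generator with a computed BFS ground truth; the graph size, hash width, and prompt wording below are all guesses, since OpenAI's exact setup isn't given:

    import random
    from collections import deque

    def make_graph(n_nodes: int = 200, n_edges: int = 600, seed: int = 0):
        # Nodes are random 64-bit hexadecimal hashes; edges are random pairs.
        rng = random.Random(seed)
        nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
        edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
        return nodes, edges

    def bfs_frontier(edges, start, depth):
        # Ground truth: nodes first reached at exactly `depth` hops from start.
        adj = {}
        for src, dst in edges:
            adj.setdefault(src, []).append(dst)
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in adj.get(node, []):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        return {n for n, d in dist.items() if d == depth}

    nodes, edges = make_graph()
    start, depth = nodes[0], 2
    edge_list = "\n".join(f"{s} -> {t}" for s, t in edges)  # this fills the context
    prompt = (f"Here is a directed graph:\n{edge_list}\n\n"
              f"Perform a BFS from node {start} and list every node at depth {depth}.")
    gold = bfs_frontier(edges, start, depth)  # scoring is a simple set comparison

The appeal is exactly as described: the generator scales to any context length, the answer is computed rather than annotated, and the model cannot succeed by matching surface text.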

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each, and the model is asked to "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
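Same caveat as above: the real OpenAI-MRCR format isn't reproduced here, just a rough sketch of how such an item could be generated, with unique markers standing in for the actual generated poems and stories:

    import random

    def make_mrcr_item(kinds=("poem", "story"),
                       topics=("tapirs", "bears", "ballerinas"),
                       per_pair: int = 50, k: int = 3, seed: int = 0):
        rng = random.Random(seed)
        docs = []
        for kind in kinds:
            for topic in topics:
                for _ in range(per_pair):
                    # A unique marker stands in for a generated poem or story.
                    docs.append((kind, topic,
                                 f"[{kind} about {topic}: {rng.getrandbits(32):08x}]"))
        rng.shuffle(docs)  # position in the context defines "first", "second", ...

        target_kind, target_topic = "poem", "tapirs"
        matches = [text for kind, topic, text in docs
                   if (kind, topic) == (target_kind, target_topic)]
        gold = matches[k - 1]  # e.g. the third poem about tapirs, by position

        context = "\n\n".join(text for _, _, text in docs)
        question = f"Give me, verbatim, {target_kind} number {k} about {target_topic}."
        return f"{context}\n\n{question}", gold

    prompt, gold = make_mrcr_item()

Because the gold answer is whichever marker landed k-th after the shuffle, grading is exact string match, and the model can only succeed by tracking both kind and topic across the whole context.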

They only test their own models on MRCR in the benchmark charts, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/