
"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that LLMs' long-context capability is still poor if you want the model to perform abstraction and reasoning.
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
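The Graphwalks construction described above is easy to reproduce in a few lines; here's a rough sketch of the idea (function names, parameters, and prompt wording are mine, not OpenAI's actual harness):

```python
import random

def make_graphwalks_prompt(n_nodes=64, n_edges=128, depth=2, seed=0):
    """Build a Graphwalks-style eval item: a random directed graph whose
    nodes are hex hashes, plus the BFS ground truth at a given depth."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    # Adjacency list for the directed graph.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)
    # BFS: after `depth` iterations, `frontier` holds the nodes first
    # reached exactly `depth` hops from `start`.
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, []) if d not in seen}
        seen |= frontier

    prompt = "\n".join(f"{src} -> {dst}" for src, dst in edges)
    prompt += (f"\n\nPerform a BFS from node {start} and return every node "
               f"exactly {depth} hops away.")
    return prompt, frontier
```

Grading is then just comparing the set of hashes the model emits against `frontier`; because the labels are random hex, the model can't shortcut the search with world knowledge.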

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty each. The model is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
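The MRCR setup can be sketched the same way; everything below (names, the "..." filler text, the topic list) is illustrative, not OpenAI's actual generator, but it shows why grading reduces to an exact string comparison:

```python
import random

def make_mrcr_item(n_per_combo=50, seed=0):
    """Build an MRCR-style haystack: many short 'poems' and 'stories' about
    a few topics, interleaved, then ask for the i-th poem about one topic.
    The ground truth is the exact text, so grading is a string match."""
    rng = random.Random(seed)
    kinds, topics = ["poem", "story"], ["tapirs", "bears", "ballerinas"]
    items = []
    for kind in kinds:
        for topic in topics:
            for i in range(n_per_combo):
                items.append((kind, topic, f"{kind} #{i} about {topic}: ..."))
    rng.shuffle(items)  # interleave so position can't be guessed from order

    target_kind, target_topic, ordinal = "poem", "tapirs", 3
    # Occurrences of the target (kind, topic), in haystack order.
    matches = [text for k, t, text in items if (k, t) == (target_kind, target_topic)]
    haystack = "\n".join(text for _, _, text in items)
    question = f"Give me {target_kind} number {ordinal} about {target_topic}."
    return haystack, question, matches[ordinal - 1]
```

Note that the answer is the ordinal-th *occurrence in the haystack*, not the ordinal-th generated item, which is exactly what forces the model to track order across the whole context.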

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/