frontpage.

The Quiet Coup: How AI Is Rewriting Power, Wealth, and Human Agency

https://neerajkarimpuzha.wordpress.com/2026/04/18/293/
1•neeraj_r•2m ago•0 comments

Fixing DNS tail latency with a 5-line config and a 50-line function

https://numa.rs/blog/posts/fixing-doh-tail-latency.html
1•fanf2•2m ago•0 comments

Biangbiang Noodles

https://en.wikipedia.org/wiki/Biangbiang_noodles
1•thunderbong•4m ago•0 comments

China humanoid robot half-marathon to showcase technical leaps

https://www.reuters.com/world/asia-pacific/china-humanoid-robot-half-marathon-showcase-technical-...
3•JumpCrisscross•8m ago•0 comments

A brief history of C/C++ programming languages

https://lemire.me/blog/2026/04/09/a-brief-history-of-c-c-programming-languages/
1•signa11•8m ago•0 comments

Cannabis may make you remember things that never happened

https://www.nationalgeographic.com/health/article/how-cannabis-affects-memory-thc-false-recall
2•johntfella•14m ago•0 comments

Anthropic decided to shut down our organization for an alleged violation

https://twitter.com/patomolina/status/2045281665363386504
1•isolli•14m ago•1 comments

Ask HN: How do small startups, solo/lean HR agencies manage hiring pipeline?

1•kathir05•17m ago•0 comments

Show HN: I can't write Python. It works anyway

https://github.com/Wewoc/Garmin_Local_Archive
1•Wewoc•19m ago•0 comments

Laimark – 8B LLM that self-improves. Consumer GPU

https://github.com/seetrex-ai/laimark
2•jesustabares•26m ago•0 comments

Peter Thiel Is Launching an "AI Ministry of Truth" Called Objection

https://old.reddit.com/r/antiai/comments/1sngw6f/peter_thiel_is_launching_an_ai_ministry_of_truth/
3•doener•33m ago•0 comments

Men caught competing in women's category of prestigious South African marathon

https://www.cnn.com/2026/04/17/sport/men-found-womens-category-sa-marathon-intl-scli
1•breve•33m ago•0 comments

Grok TTS and STT APIs

https://x.ai/news/grok-stt-and-tts-apis
2•chopete3•33m ago•1 comments

BibCrit – LLM grounded in ETCBC corpus data for Biblical textual criticism

https://github.com/Jossifresben/BibCrit
1•jossifresben•39m ago•0 comments

Long Covid Diagnostic Out of Stanford

https://join.muno.bio/
2•limalabs•43m ago•0 comments

Forsp: A Forth+Lisp hybrid lambda calculus language (2024)

https://xorvoid.com/forsp.html
1•HeliumHydride•44m ago•0 comments

The Art of the Fictional Pop Song

https://www.newyorker.com/culture/pop-music/the-art-of-the-fictional-pop-song
2•fortran77•45m ago•0 comments

America Lost the Mandate of Heaven

https://geohot.github.io//blog/jekyll/update/2026/04/18/america-mandate-of-heaven.html
3•mefengl•48m ago•0 comments

Claude Opus wrote a Chrome exploit for $2,283

https://www.theregister.com/2026/04/17/claude_opus_wrote_chrome_exploit/
3•Mohansrk•49m ago•0 comments

Purdue University CS240 class: over 50% of students 'caught' using AI on homework

https://old.reddit.com/r/Purdue/comments/1sogfb4/comment/ogsvymy/
1•twaldin•54m ago•2 comments

Unweight: Lossless MLP Weight Compression for LLM Inference

https://research.cloudflare.com/nikulin2026/
2•jgrahamc•55m ago•0 comments

Helpmate-Live, Social and AI Chat with Built-In CRM for WordPress

1•RhapsodyPlugins•59m ago•0 comments

Show HN: A delivery gate that automatically releases files when invoice is paid

1•pixelatedRudy•1h ago•1 comments

GloraMD Face Lift Serum

https://www.facebook.com/GloraMDFaceLiftSerumUS
1•bbangerr•1h ago•0 comments

I made a self-employed expense keeper

https://bizlect.com
1•ispaceman•1h ago•0 comments

Garry Tan – On the LOC Controversy

https://twitter.com/garrytan/status/2045404377226285538
1•helloplanets•1h ago•0 comments

48 domains produce 22.5% of ChatGPT's B2B citations

https://growtika.com/blog/chatgpt-citation-economy
2•Growtika•1h ago•0 comments

Soul.md – open file format for AI agent identity

https://github.com/AntonioTF5/soul-spec
1•afonie•1h ago•0 comments

Eating fruits, vegetables and whole grains may increase chance of lung cancer

https://news.keckmedicine.org/eating-fruits-vegetables-and-whole-grains-may-increase-chance-of-ea...
3•geox•1h ago•3 comments

F1 in China: I've never seen so many people in those grandstands

https://arstechnica.com/cars/2026/03/f1-in-china-ive-never-seen-so-many-people-in-those-grandstands/
1•PaulHoule•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those claims are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the model to perform some abstraction and reasoning.
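
To make that distinction concrete, here is a minimal sketch of the two needle styles, with invented names and filler; the Dresden/opera-house pairing mirrors the kind of one-hop association NoLiMa is built around, where the question shares no words with the needle:

    # Classic needle-in-a-haystack: the question lexically overlaps the
    # needle, so literal matching over the context is enough to answer.
    literal_needle = "The secret ingredient of the stew is saffron."
    literal_question = "What is the secret ingredient of the stew?"

    # NoLiMa-style needle: zero word overlap with the question; the model
    # must bridge a latent association (Semper Opera House -> Dresden).
    nolima_needle = "Actually, Yuki lives next to the Semper Opera House."
    nolima_question = "Which character has been to Dresden?"

    def bury(needle, filler_words=50_000):
        """Drop the needle mid-way into a long span of distractor text."""
        filler = ["filler%d" % i for i in range(filler_words)]  # stand-in text
        filler.insert(filler_words // 2, needle)
        return " ".join(filler)

    long_context = bury(nolima_needle)  # ask nolima_question over this context

Models now handle the literal variant near-perfectly at long context; it is the associative variant where NoLiMa reports scores falling off well before the advertised window.
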
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
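
A rough sketch of how such a task could be generated and graded -- node count, hash length, and prompt wording here are guesses for illustration, not OpenAI's actual parameters:

    import hashlib
    import random

    def make_graph(n_nodes=500, n_edges=2000, seed=0):
        """Random directed graph whose node labels are short hex hashes."""
        rng = random.Random(seed)
        nodes = [hashlib.sha256(str(i).encode()).hexdigest()[:16]
                 for i in range(n_nodes)]
        edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
        return nodes, edges

    def nodes_at_depth(edges, start, depth):
        """Ground truth: nodes first reached exactly `depth` BFS steps from start."""
        adj = {}
        for src, dst in edges:
            adj.setdefault(src, []).append(dst)
        frontier, seen = {start}, {start}
        for _ in range(depth):
            frontier = {d for n in frontier for d in adj.get(n, []) if d not in seen}
            seen |= frontier
        return frontier

    nodes, edges = make_graph()
    start = nodes[0]
    prompt = "\n".join("%s -> %s" % e for e in edges) + (
        "\n\nPerform a BFS from %s and return all nodes at depth 2." % start)
    expected = nodes_at_depth(edges, start, 2)  # grade the model against this set

The appeal is that the answer set is exactly computable, and the relevant edges are scattered across the whole window, so keyword lookup at a single location isn't enough.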

MRCR asks for direct quotes from semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The model is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
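
In the same spirit, a sketch of the MRCR setup with placeholder strings standing in for the model-generated poems and stories (the topics and templates here are illustrative, not the benchmark's actual data):

    import random

    FORMS, TOPICS = ["poem", "story"], ["tapirs", "bears", "ballerinas"]

    def build_corpus(per_pair=50, seed=0):
        """Generate and interleave many (form, topic) pieces into one context."""
        rng = random.Random(seed)
        items = [(f, t, "[%s about %s, variant %d] ..." % (f, t, i))
                 for f in FORMS for t in TOPICS for i in range(per_pair)]
        rng.shuffle(items)  # variant ids keep pieces distinct but leak no position
        return items

    def gold_answer(items, form, topic, ordinal):
        """The ordinal-th matching piece in document order, graded as an exact quote."""
        matches = [text for f, t, text in items if f == form and t == topic]
        return matches[ordinal - 1]

    items = build_corpus()
    context = "\n\n".join(text for _, _, text in items)
    question = "Give me the third poem about tapirs, quoted exactly."
    gold = gold_answer(items, "poem", "tapirs", 3)

Grading reduces to a string comparison against gold, while answering correctly requires tracking document order across the full window rather than retrieving by keyword.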

They only test their own models on MRCR in the benchmark chart, but it's still worth reviewing -- the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/