frontpage.

What I changed in how I use Claude Code after Anthropic's postmortem

1•cinooo•1m ago•0 comments

The China chip hype seems to be inference only. Is Jensen's worry true?

https://twitter.com/natolambert/status/2049634340561436966
1•jwzxgo•17m ago•0 comments

Three Cobblers, One Zhuge Liang: Making Cheaper Models Work Together

https://markhuang.ai/blog/three-cobblers-one-zhuge-liang-ai-architecture
2•zh_code•18m ago•0 comments

The Zig project's rationale for their firm anti-AI contribution policy

https://simonwillison.net/2026/Apr/30/zig-anti-ai/
2•lumpa•19m ago•1 comments

Elon Musk's worst enemy in court is Elon Musk

https://www.theverge.com/tech/921022/elon-musk-cross-openai-altman
4•granzymes•19m ago•1 comments

The Lightening of Intent

https://aneeshsathe.substack.com/p/the-lightening-of-intent
1•boredgargoyle•20m ago•0 comments

Generation Alpha

https://en.wikipedia.org/wiki/Generation_Alpha
1•keepamovin•26m ago•0 comments

Musk Says He 'Was a Fool' to Provide OpenAI's Early Funding

https://www.nytimes.com/2026/04/29/technology/musk-openai-trial-altman.html
4•1vuio0pswjnm7•26m ago•0 comments

Musk casts himself as AI's good guy in testimony vs. OpenAI

https://www.axios.com/2026/04/30/musk-openai-safety-grok
3•1vuio0pswjnm7•27m ago•0 comments

7-Zip 26.01 (7zip) – A free file archiver for high compression

https://sourceforge.net/p/sevenzip/discussion/45797/thread/555e132ba4/
2•neustradamus•28m ago•1 comments

FDA alleges 'manipulated' data supported approval of Amgen's autoimmune drug

https://www.biospace.com/fda/fda-alleges-manipulated-data-supported-approval-of-amgens-autoimmune...
2•randycupertino•33m ago•1 comments

Zulip 12.0 Released

https://blog.zulip.com/2026/04/27/zulip-12-0-released/
2•tabbott•34m ago•0 comments

Wanman: Open-source agent matrix network with JSON-RPC communications

https://github.com/chekusu/wanman/
3•imWildCat•41m ago•0 comments

Open-source briefing packets and citizen-action toolkits

https://github.com/ClosedNetwork/closed-network-flock-resources
2•pkaeding•41m ago•0 comments

Copilot Student GPT-5.3-Codex removal from model picker

https://github.blog/changelog/2026-04-27-copilot-student-gpt-5-3-codex-removal-from-model-picker/
1•aaronsung•43m ago•1 comments

AInvest

https://www.ainvest.com
2•Yang_Ruichen•44m ago•0 comments

Show HN: Agent that refuses to run commands without human approval

https://github.com/few-sh/fewshell
3•hexer303•45m ago•0 comments

Microsoft lifts 2026 AI spend by $25B to cover component price rises

https://www.theregister.com/2026/04/30/microsoft_q3_2026/
4•omer_k•47m ago•0 comments

A Grounded Conceptual Model for Ownership Types in Rust

https://cacm.acm.org/research-highlights/a-grounded-conceptual-model-for-ownership-types-in-rust/
5•tkhattra•48m ago•0 comments

Have You Seen the New Excel?

https://idiallo.com/blog/have-you-seen-the-new-xl-ai-parody
7•jnord•49m ago•0 comments

Neural similarity predicts whether strangers become friends

https://www.nature.com/articles/s41562-025-02266-7#Sec2
3•E-Reverance•50m ago•0 comments

Craig Venter has died

https://www.jcvi.org/media-center/j-craig-venter-genomics-pioneer-and-founder-jcvi-and-diploid-ge...
53•rdl•50m ago•11 comments

On the stand, Elon Musk can't escape his own tweets

https://techcrunch.com/2026/04/29/on-the-stand-elon-musk-cant-escape-his-own-tweets/
4•jnord•50m ago•0 comments

The feed doesn't know you, and YouTube refuses to let you browse

https://evilgeniuslabs.ca/blog/the-feed-doesnt-know-you
3•paulpauper•54m ago•0 comments

We Don't Know How A.I. Works. That's a Problem

https://www.nytimes.com/2026/04/15/magazine/ai-black-box-interpretability-research.html
3•lxm•54m ago•1 comments

When a tornado hits after US Government mass-deploy auto kill-switch

https://twitter.com/gatlin_didier/status/2049617318112534743
2•egberts1•1h ago•0 comments

Failed AI tractor company lays off all employees, abandons Bay Area headquarters

https://www.sfgate.com/tech/article/monarch-ai-tractor-failure-22183476.php
5•randycupertino•1h ago•0 comments

Show HN: WorkProof – JSON schema for skill evidence graphs

https://github.com/TalentProof/workproof-schema
2•parth4•1h ago•0 comments

Botfiles: Dotfiles-esque setup for Managing Agents

https://twitter.com/curious_queue/status/2049660997993152855
2•sourya4•1h ago•1 comments

Zwift buys Rouvy in shake-up of indoor cycling

https://www.bikeradar.com/news/zwift-buys-rouvy-in-massive-shake-up-of-indoor-cycling
3•obilgic•1h ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•12mo ago

Comments

kzawpl•12mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
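A minimal toy illustration of the distinction being drawn here (my own example, not the NoLiMa suite itself): a literal needle-in-a-haystack question can be answered by string matching, while a "latent" question over the same needle needs one step of world-knowledge reasoning. The character name and sentences below are made up for illustration.

```python
# Toy contrast between exact-match retrieval and a one-hop latent question.
# (Illustrative only; NoLiMa's actual needles and distractors are more careful.)

needle = "Yuki lives next to the Semper Opera House."
haystack = ("Filler sentence. " * 5000) + needle + (" Filler sentence." * 5000)

# Exact-match retrieval: the answer span appears verbatim near the question's words.
literal_q = "Who lives next to the Semper Opera House?"

# Latent question: answering requires knowing the Semper Opera House is in
# Dresden -- the word "Dresden" never appears in the context at all.
latent_q = "Which character lives in Dresden?"

assert needle in haystack
assert "Dresden" not in haystack
```

NoLiMa's finding, as summarized above, is that models which ace the literal question at long context still degrade sharply on the latent one as the haystack grows.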
vessenes•12mo ago
Meh. NoLima is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But, it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done nearly perfectly across millions of tokens of context. There just hasn't yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
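A sketch of what a Graphwalks-style item generator could look like (function name, parameters, and edge format are my own guesses, not OpenAI's actual generator): fill the context with random directed edges between hex-hash node IDs, then compute the gold answer with a plain BFS so grading is exact.

```python
import random

def make_graphwalks_item(n_nodes=50, n_edges=150, depth=2, seed=0):
    """Generate (context, question, gold_answer) for a toy Graphwalks-style eval."""
    rng = random.Random(seed)
    # Node IDs are short hexadecimal hashes, as the announcement describes.
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    # Adjacency map for computing the gold answer.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, set()).add(dst)

    start = rng.choice(nodes)

    # BFS: after `depth` expansions, `frontier` holds exactly the nodes whose
    # shortest distance from `start` is `depth`.
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, ()) if d not in seen}
        seen |= frontier

    context = "\n".join(f"{s} -> {d}" for s, d in edges)
    question = f"Perform a BFS from node {start} and list all nodes at depth {depth}."
    return context, question, frontier
```

The appeal vessenes points at: the answer is a machine-checkable set, and solving it forces the model to repeatedly attend across arbitrary positions in the context rather than just locate one span.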

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty each. The model is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
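A toy MRCR-style generator along the lines described above (my own sketch; the real benchmark uses actual generated poems and stories, not the placeholder strings here): interleave many short items of two kinds about a few topics, then ask for an exact quote of the Nth item of one kind and topic.

```python
import random

TOPICS = ["tapirs", "bears", "ballerinas"]

def make_mrcr_item(per_kind=50, seed=0):
    """Generate (context, question, gold_quote) for a toy MRCR-style eval."""
    rng = random.Random(seed)
    items = []  # (kind, topic, text)
    for kind in ("poem", "story"):
        for topic in TOPICS:
            for i in range(per_kind):
                # Placeholder text standing in for a generated poem/story;
                # the random id makes every item unique.
                items.append((kind, topic, f"{kind} #{i} about {topic} (id={rng.getrandbits(16)})"))
    rng.shuffle(items)  # interleave everything in one long context

    context = "\n".join(text for _, _, text in items)

    # Gold answer: the third poem about tapirs *in document order*,
    # so the model must count matching items, not just retrieve one.
    tapir_poems = [t for k, top, t in items if k == "poem" and top == "tapirs"]
    question = "Give me the third poem about tapirs, quoted exactly."
    return context, question, tapir_poems[2]
```

Grading is again trivial (exact string match against the gold quote), which is what makes this kind of data cheap to generate and evaluate at scale.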

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/