frontpage.

Do links hurt news publishers on Twitter? Our analysis suggests yes

https://www.niemanlab.org/2026/04/do-links-hurt-news-publishers-on-twitter-our-analysis-suggests-...
1•giuliomagnifico•5m ago•0 comments

Nigel Farage wants to build a British ICE. Starmer may have handed him the tools

https://www.thenerve.news/p/reform-deportation-operation-restoring-justice-data-surveillance-pala...
1•doener•6m ago•0 comments

Fast, cheap AI-assisted decompilation of binary code is here

https://twitter.com/esrtweet/status/2042002143045890412
1•tosh•7m ago•0 comments

Engineers Are Great for Marketing

https://www.usenotra.com/blog/engineers-are-great-marketing
1•DominikKoch•8m ago•0 comments

Largest Dutch pension fund cuts ties with controversial tech firm Palantir

https://nltimes.nl/2026/04/02/largest-dutch-pension-fund-cuts-ties-controversial-tech-firm-palantir
3•doener•9m ago•0 comments

Cisco: Cybersecurity Remains Top Challenge as Industrial AI Adoption Expands

https://techgraph.co/tech/cisco-cybersecurity-remains-top-challenge-as-industrial-ai-adoption-exp...
1•visitednews•10m ago•0 comments

FalconFly 3dfx Archive

https://3dfxarchive.com/3dfx.htm
1•BruceEel•11m ago•0 comments

Influence Campaign on TikTok Uses AI Videos to Boost Hungary's Orbán

https://www.newsguardtech.com/special-reports/influence-campaign-uses-ai-tiktok-videos-to-boost-h...
1•doener•14m ago•0 comments

Reallocating $100/Month Claude Code Spend to Zed and OpenRouter

https://braw.dev/blog/2026-04-06-reallocating-100-month-claude-spend/
1•kisamoto•15m ago•0 comments

Škoda's Duobell bicycle bell outsmarts ANC headphones

https://www.heise.de/en/news/koda-s-Duobell-bicycle-bell-outsmarts-ANC-headphones-11249665.html
1•thdr•15m ago•0 comments

Content Giant Slashed Telemetry Cost 79%, Saved $1.2M

https://www.mydecisive.ai/blog/content_giant_case_study
1•jratkevic•19m ago•0 comments

A study linked various SAT test scores to favorite bands

https://twitter.com/arcticinstincts/status/2041936594601701393
2•MrBuddyCasino•21m ago•0 comments

We Have Become Obsessed with Attachment. And It Is Causing Harm

https://whatwouldjesssay.substack.com/p/we-have-become-obsessed-with-attachment
1•rendx•23m ago•0 comments

Some Better Defaults for Emacs

https://git.sr.ht/~technomancy/better-defaults/blob/main/better-defaults.el
1•fanf2•28m ago•1 comments

PBXN-110

https://en.wikipedia.org/wiki/Polymer-bonded_explosive
2•simonebrunozzi•30m ago•0 comments

Ask HN: What is the future of Devs, after launch of Anthropic's Glasswing?

3•shivang2607•34m ago•0 comments

No fine-tuning, no RAG – boosting Claude Code's bioinformatics up to 92%

https://github.com/jaechang-hits/SciAgent-Skills
1•jaechang•35m ago•1 comments

Opera 130 stable arrives with Chromium 146 and Twitch support

https://www.notebookcheck.net/Opera-130-stable-arrives-with-Chromium-146-and-Twitch-support.12697...
2•DarrylLinington•35m ago•0 comments

cppreference.com has been under maintenance for a year

https://en.cppreference.com/
1•GalaxySnail•35m ago•0 comments

Veteran artist behind Mass Effect, Halo, & Overwatch 2 weighs in on Nvidia DLSS5

https://www.notebookcheck.net/Veteran-artist-behind-Mass-Effect-Halo-and-Overwatch-2-weighs-in-on...
1•DarrylLinington•36m ago•0 comments

I was copy-pasting to Claude from WhatsApp – so I fixed that

https://github.com/sliamh11/Deus
1•sliamh11•37m ago•1 comments

From bytecode to bytes: automated magic packet generation

https://blog.cloudflare.com/from-bpf-to-packet/
1•syscll•40m ago•0 comments

Show HN: Giving My First Pitch at 1M Cups Using a Custom Mobile App

https://andonalert.net/dev-blog/giving-my-first-pitch-at-1-million-cups
3•SolarpunkRachel•45m ago•0 comments

Neural Computers

https://arxiv.org/abs/2604.06425
2•50kIters•46m ago•0 comments

A hacker has allegedly breached one of China's supercomputers

https://www.cnn.com/2026/04/08/china/china-supercomputer-hackers-hnk-intl
2•tamnd•50m ago•0 comments

Amazon Cuts Kindle Store Access for 2012 and Older Kindle Models Starting May 20

https://www.ghacks.net/2026/04/09/amazon-cuts-kindle-store-access-for-2012-and-older-kindle-model...
1•penguin_booze•51m ago•0 comments

Ask HN: How do you monitor and debug integrations in production?

1•OdinSpecc•54m ago•0 comments

Seedance 2.0 on live–their strongest multimodal AI video model with native audio

https://seedance2video.cloud/
1•bingbing123•55m ago•0 comments

Show HN: I built a free open-source SVG to 3D tool

https://3dsvg.design
2•renatoworks•55m ago•1 comments

Today Is CSS Naked Day

https://css-naked-day.org/?
2•edent•56m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
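
Roughly the flavor of the difference (a toy Python sketch in the spirit of NoLiMa's examples, not actual benchmark items):

    # Toy sketch, not actual NoLiMa data: the same needle is buried in a long
    # filler context, but the two questions differ. The first can be answered
    # by literal string matching; the second needs a latent association
    # (Semper Opera House -> Dresden) on top of retrieval.
    filler = "The quick brown fox jumps over the lazy dog. " * 10000
    needle = "Actually, Yuki lives next to the Semper Opera House."
    context = filler + needle + filler

    literal_match_question = "Who lives next to the Semper Opera House?"
    nolima_style_question = "Which character has been to Dresden?"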
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
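
You could generate something like that with a toy Python helper along these lines (a sketch of the idea, not OpenAI's actual generator):

    import random
    from collections import deque

    # Rough sketch of a Graphwalks-style task: fill the context with a random
    # directed graph over hex-hash node names, pick a start node, and compute
    # the ground-truth answer (all nodes first reached at a given BFS depth),
    # so the eval is cheap to generate and to grade.
    def make_graphwalks_task(n_nodes=200, n_edges=600, depth=2, seed=0):
        rng = random.Random(seed)
        nodes = ["%08x" % rng.getrandbits(32) for _ in range(n_nodes)]
        edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
        adj = {}
        for src, dst in edges:
            adj.setdefault(src, []).append(dst)

        start = rng.choice(nodes)
        dist = {start: 0}                 # BFS distance from the start node
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in adj.get(node, []):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        answer = sorted(n for n, d in dist.items() if d == depth)

        prompt = "\n".join("%s -> %s" % edge for edge in edges)
        prompt += ("\n\nStarting from node %s, perform a breadth-first search "
                   "and list every node at depth %d." % (start, depth))
        return prompt, answer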

MRCR asks for direct quotes from semantically identified locations in the text: e.g. poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each, and the model is asked to "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
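
Again, just to sketch the shape of the thing (a hypothetical helper, not OpenAI's generator):

    import random

    # Rough sketch of an MRCR-style task: many generated poems and stories
    # about a few subjects are shuffled together, and the model must quote
    # back exactly the k-th poem about one subject. Because the generator
    # knows the document order, grading is a plain string comparison.
    def make_mrcr_task(subjects=("tapirs", "bears", "ballerinas"),
                       per_kind=50, ask_subject="tapirs", ask_index=3, seed=0):
        rng = random.Random(seed)
        items = []
        for kind in ("poem", "story"):
            for subject in subjects:
                for _ in range(per_kind):
                    uid = "%04x" % rng.getrandbits(16)
                    items.append((kind, subject,
                                  "Here is a %s about %s. [draft %s]"
                                  % (kind, subject, uid)))
        rng.shuffle(items)

        context = "\n\n".join(text for _, _, text in items)
        question = ("Of the poems about %s, quote poem number %d exactly, "
                    "counting from the top of the document."
                    % (ask_subject, ask_index))
        # ground truth: the ask_index-th poem about ask_subject in document order
        poems = [text for kind, subject, text in items
                 if kind == "poem" and subject == ask_subject]
        return context + "\n\n" + question, poems[ask_index - 1]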

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/