frontpage.

Transformer.js v4

https://huggingface.co/blog/transformersjs-v4
1•sroussey•3m ago•1 comment

GitHub backs down, kills Copilot pull-request ads after backlash

https://www.theregister.com/2026/03/30/github_copilot_ads_pull_requests/
2•jjgreen•4m ago•0 comments

High memory usage in Postgres is good

https://planetscale.com/blog/high-memory-usage-in-postgres-is-good-actually
1•cyndunlop•5m ago•0 comments

6o6 v1.1: Faster 6502-on-6502 virtualization for a C64/Apple II Apple-1 emulator

http://oldvcr.blogspot.com/2026/03/6o6-v11-faster-6502-on-6502.html
1•homarp•5m ago•0 comments

Show HN: I made a cheaper alternative to Claude Code or Codex CLI

https://sweetcli.com/
2•gr00ve•6m ago•0 comments

Hermes Agent v0.6.0 solves its biggest weakness against OpenClaw

https://efficienist.com/hermes-agent-v0-6-0-finally-solves-its-biggest-weakness-against-openclaw/
1•jenic_•13m ago•0 comments

Judge halts Nexstar/Tegna merger after FCC let firms exceed TV ownership limit

https://arstechnica.com/tech-policy/2026/03/judge-halts-nexstar-tegna-merger-after-fcc-let-firms-...
1•strongpigeon•16m ago•0 comments

Delve into Compliance Theatre

https://www.complexsystemspodcast.com/episodes/delve-into-compliance-theatre/
1•flypunk•16m ago•0 comments

Love Dart

https://en.wikipedia.org/wiki/Love_dart
2•magicbuzz•16m ago•0 comments

Quay.io down: Nginx 502 bad gateway

https://status.redhat.com/incidents/04vm68m00t61
2•starefossen•18m ago•1 comment

Glyphs 3

https://glyphsapp.com
2•hmokiguess•18m ago•0 comments

When the Real World Came to Wicker Park (2021)

https://www.chicagomag.com/chicago-magazine/august-2021/when-the-real-world-came-to-wicker-park/
2•mooreds•20m ago•0 comments

AgentHandover: Watches you work then teaches your AI agents to do it like you

https://github.com/sandroandric/AgentHandover
2•ainthusiast•21m ago•1 comment

Run open-source AI skills in one click in the cloud

https://github.com/ReByteAI/run-any-skill-with-single-click
2•sonicgg•25m ago•0 comments

Is it safe to turn off your ad blocker?

https://blog.zgp.org/is-it-safe-to-turn-off-your-ad-blocker/
1•speckx•26m ago•0 comments

Mistral raises $830M to build Nvidia-powered AI centres in Europe

https://www.ft.com/content/229f4f59-d518-4e00-abd6-5a5b727cd2aa
5•JumpCrisscross•27m ago•0 comments

Explore the top 1000 most-starred repositories on GitHub

https://repo-explorer-nu.vercel.app/
3•geox•29m ago•0 comments

Ask HN: What's your favorite number, and why?

3•QuantumNomad_•29m ago•11 comments

Office, Messaging and Verbs

https://www.ben-evans.com/benedictevans/2015/5/21/office-messaging-and-verbs
2•AftHurrahWinch•30m ago•1 comment

Kelsey Hightower: "Everyone is a junior engineer when it comes to AI"

https://thenewstack.io/hightower-ai-open-source-kubecon/
3•mooreds•31m ago•0 comments

A Crisis Merely Postponed

https://www.nytimes.com/2011/08/03/opinion/the-debt-crisis-merely-postponed.html
3•jonathanehrlich•32m ago•0 comments

Ask HN: Is Google down or is it just me?

3•ahmedfromtunis•35m ago•1 comment

Joins Are Not Expensive

https://www.database-doctor.com/posts/joins-are-not-expensive
5•thunderbong•37m ago•0 comments

Market Participation Is Exhausting

https://pluralistic.net/2026/03/30/players-of-games/
2•hn_acker•37m ago•0 comments

A curated corpus of incidents and attack vectors for autonomous AI agents

https://github.com/h5i-dev/awesome-ai-agent-incidents
1•syumei•43m ago•0 comments

Microsoft plans 100% native Windows 11 apps in major shift away from web wrappers

https://www.techspot.com/news/111872-microsoft-plans-100-native-windows-11-apps-major.html
3•shaicoleman•43m ago•0 comments

White House Renamed 'Epstein Island' on Google Pixel Phones

https://www.washingtonpost.com/style/power/2026/03/27/white-house-google-database-epstein/
13•thisislife2•43m ago•1 comment

There are no happy endings on the internet

https://embedded.substack.com/p/there-are-no-happy-endings-on-the
1•herbertl•44m ago•0 comments

UK Politicians Continue to Miss the Point in Latest Social Media Ban Proposal

https://www.eff.org/deeplinks/2026/03/uk-politicians-continue-miss-point-latest-social-media-ban-...
3•hn_acker•44m ago•0 comments

Ubuntu MATE Is Seeking a New Primary Maintainer

https://discourse.ubuntu.com/t/ubuntu-mate-seeking-maintainers/79264
3•apple4ever•45m ago•2 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•11mo ago
Meh. NoLiMa is helpful in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting: OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
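A Graphwalks-style task can be generated programmatically along these lines. This is a minimal sketch, not OpenAI's actual implementation: the function name, graph sizes, and prompt wording are all illustrative. The point is that the ground-truth answer (the BFS layer at a given depth) is computed mechanically, so grading is trivial even when the graph fills a huge context:

```python
import random
from collections import deque

def make_graphwalks_prompt(num_nodes=32, num_edges=64, depth=2, seed=0):
    """Sketch of a Graphwalks-style task: a directed graph of hex hashes,
    plus the ground-truth set of nodes first reached at exactly `depth`
    BFS hops from a random start node."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(64):016x}" for _ in range(num_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(num_edges)]

    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)
    # Standard BFS, recording the first depth at which each node is reached.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)

    answer = sorted(n for n, d in dist.items() if d == depth)
    prompt = (
        "Edges:\n"
        + "\n".join(f"{s} -> {t}" for s, t in edges)
        + f"\n\nStarting from {start}, list every node exactly "
          f"{depth} BFS hops away."
    )
    return prompt, answer
```

Scaling `num_nodes`/`num_edges` up fills an arbitrarily large context window, while the graded answer stays an exact, deterministic set.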

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about the same topics, are generated -- perhaps fifty of each. The system is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
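The same generate-then-grade idea can be sketched for an MRCR-style task. Again this is illustrative, not the real benchmark: the real MRCR uses model-generated poems and stories, whereas the filler text, function name, and query wording below are placeholders. What matters is that the target is "the n-th item of kind K about topic T in order of appearance", so answering requires counting occurrences rather than keyword matching:

```python
import random

def make_mrcr_prompt(topics=("tapirs", "bears", "ballerinas"),
                     kinds=("poem", "story"), per_combo=3, seed=0):
    """Sketch of an MRCR-style task: many items per (kind, topic) pair,
    shuffled into one long context, with a query for one specific item.
    Assumes per_combo >= 3 so a 'number 3' target exists."""
    rng = random.Random(seed)
    items = []
    for kind in kinds:
        for topic in topics:
            for i in range(per_combo):
                # Placeholder filler standing in for a generated poem/story.
                text = f"[{kind} {i + 1} about {topic}: filler {rng.random():.6f}]"
                items.append((kind, topic, text))
    rng.shuffle(items)

    context = "\n".join(text for _, _, text in items)
    # Target the 3rd occurrence of a given (kind, topic) in document order:
    # answering correctly requires counting, not exact-match retrieval.
    kind, topic, n = "poem", topics[0], 3
    matches = [t for k, tp, t in items if k == kind and tp == topic]
    answer = matches[n - 1]
    query = (f"Return, verbatim, {kind} number {n} about {topic}, "
             f"counting in order of appearance.")
    return context, query, answer
```

Because every distractor shares the same topics and near-identical form, exact text search alone cannot locate the answer; the model has to track both category (poem vs. story) and ordinal position across the whole context.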

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/