
Gemma 4: The new standard for local agentic intelligence on Android

https://android-developers.googleblog.com/2026/04/gemma-4-new-standard-for-local-agentic-intellig...
1•xnx•39s ago•0 comments

Show HN: Ecotokens – Another token saver for CLI Agents

https://github.com/hansipie/ecotokens
1•ansicode•46s ago•0 comments

Options for Phones at Protests

https://blog.yaelwrites.com/options-for-phones-at-protests/
1•speckx•3m ago•0 comments

SystemRescue 13 lands with Linux 6.18 and bcachefs support

https://www.theregister.com/2026/04/02/systemrescue_13/
1•Bender•4m ago•0 comments

A simple online forum written in Prolog

https://github.com/danilp-id/bbs
2•triska•4m ago•0 comments

Ruby Central report reopens wounds over RubyGems repo takeover

https://www.theregister.com/2026/04/01/ruby_central_report/
1•Bender•5m ago•0 comments

Orchestration-as-Code – Orchestration and software are the same

https://chriswood.tech/2026/03/25/orchestration-as-code/
1•gpi•6m ago•0 comments

Making Services with Go Right Way

https://snawoot.github.io/go_web_right_way.html
1•lr0•7m ago•0 comments

Show HN: Job market trends across 1,100 tech companies

https://risogroup.co/projects/hiring-pulse/
1•jamesriso•7m ago•0 comments

Teen's explicit Gemini Live encounter gets whole family banned

https://www.androidauthority.com/explicit-gemini-live-3654114/
1•speckx•7m ago•0 comments

Software Engineering Is Becoming Civil Engineering

https://christophermeiklejohn.com/ai/engineering/2026/04/01/software-engineering-is-becoming-civi...
1•matt_d•7m ago•0 comments

MultiGen: AI multiplayer doom playable in real-time on your phone and computer

https://play-multigen.com/
1•potatoescrisps•8m ago•0 comments

JPMorgan Eyes $10B Daily Blockchain Goal

https://catenaa.com/markets/global-markets/jpmorgan-blockchain-payments-kinexys-mitsubishi/
1•Murugaverl•9m ago•0 comments

Useful Quantum Computers Could Be Built with as Few as 10k Qubits

https://www.caltech.edu/about/news/caltech-team-finds-useful-quantum-computers-could-be-built-wit...
1•gmays•9m ago•0 comments

Ask HN: How do you run discovery with zero network?

1•_Tarik•14m ago•1 comments

NASA space launch sets stage for nuclear power on the moon

https://www.eenews.net/articles/nasa-space-launch-sets-stage-for-nuclear-power-on-the-moon/
1•mpweiher•15m ago•0 comments

Brute Force – Binary Tree Traversal

https://algorithm-visualizer.org/brute-force/binary-tree-traversal
1•speckx•18m ago•0 comments

I Made a Keyboard Nobody Asked For: My Experience Making TapType

https://fireborn.mataroa.blog/blog/i-made-a-keyboard-nobody-asked-for-my-experience-making-taptype/
2•birdculture•18m ago•0 comments

Show HN: Portcullis, a review gate for curl|bash

https://github.com/imjasonh/portcullis
2•ImJasonH•20m ago•0 comments

Bash Webcam-Viewer

https://flockaroo.at/blog/?view=article&id=4
1•flockaroo•22m ago•0 comments

State of Async Work in 2026

https://whimsical.com/blog/state-of-async-2026
1•SenHeng•22m ago•0 comments

Gemma 4 Just Released

https://twitter.com/demishassabis/status/2039736628659269901
1•tzury•22m ago•0 comments

Ad Insertion for MOQ: What Is Possible Today and What Comes Next

https://www.red5.net/blog/ad-insertion-for-media-over-quic/
1•mondainx•22m ago•0 comments

Show HN: I got Claude Code to run in Binary

https://github.com/topoteretes/cognee-claude-code-binary
1•vasa_•23m ago•0 comments

Surf Social

https://surf.social
1•biggestfan•23m ago•0 comments

Modern SQLite: Features You Didn't Know It Had

https://slicker.me/sqlite/features.htm
10•thunderbong•25m ago•0 comments

Cisco Time Series Model 1.0

https://github.com/splunk/cisco-time-series-model
2•technimad•26m ago•0 comments

JSON and C++26 compile-time reflection: a talk

https://lemire.me/blog/2026/03/26/json-and-c26-compile-time-reflection-a-talk/
3•arunc•29m ago•0 comments

Group Pushing Age Verification for AI Turns Out to Be Backed by OpenAI

https://gizmodo.com/group-pushing-age-verification-requirements-for-ai-turns-out-to-be-sneakily-b...
3•SilverElfin•29m ago•0 comments

Gemma 4 running on NVIDIA and AMD from Day 0 with MAX

https://www.modular.com/blog/day-zero-launch-fastest-performance-for-gemma-4-on-nvidia-and-amd
2•sparklychipmunk•30m ago•0 comments

"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But, it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
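That generation scheme is easy to reproduce. Here is a minimal, hypothetical sketch of a Graphwalks-style instance builder (names and parameters are my own, not OpenAI's): it fills a "context" with random directed edges between hex-hash labels, then computes the ground-truth BFS frontier at a given depth so a model's answer can be checked programmatically.

```python
import random
from collections import deque

def make_graphwalks_instance(n_nodes=64, n_edges=128, depth=2, seed=0):
    """Build a toy Graphwalks-style instance: a directed graph over
    hex-hash node labels, a random start node, and the ground-truth
    set of nodes first reached at the requested BFS depth."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    # Adjacency list for the directed graph.
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)

    # BFS, recording the depth at which each node is first reached.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)

    answer = sorted(n for n, d in dist.items() if d == depth)
    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    return prompt, start, answer
```

The prompt length scales with `n_edges`, so the same generator can fill an arbitrarily large context window while the exact-match answer stays cheap to verify.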

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
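The same programmatic-generation idea can be sketched for MRCR. This is an illustrative toy (the real benchmark uses model-generated texts; all names here are my own): it shuffles many short labeled "poems" and "stories" into one haystack and derives the ground-truth answer for an ordinal question like "poem number 3 about tapirs".

```python
import random

def make_mrcr_instance(kinds=("poem", "story"),
                       topics=("tapirs", "bears", "ballerinas"),
                       per_combo=5, seed=0):
    """Build a toy MRCR-style instance: interleaved short texts of two
    kinds about a few topics, plus a question whose answer requires
    tracking both the category and the ordinal position in document order."""
    rng = random.Random(seed)
    items = [(kind, topic, f"{kind} #{i} about {topic}")
             for kind in kinds for topic in topics
             for i in range(per_combo)]
    rng.shuffle(items)

    haystack = "\n\n".join(text for _, _, text in items)

    # Question: the 3rd poem about tapirs, counted in document order.
    ordinal, want_kind, want_topic = 3, "poem", "tapirs"
    matches = [text for kind, topic, text in items
               if kind == want_kind and topic == want_topic]
    question = f"Give me {want_kind} number {ordinal} about {want_topic}."
    answer = matches[ordinal - 1]
    return haystack, question, answer
```

Evaluation is then a direct string match against the quoted text, which is what makes the benchmark cheap to score at scale.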

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/