frontpage.
Show HN: Minima – Open-source micro-learning LMS (alternative to Moodle)

https://github.com/cobel1024/minima
1•pigon1002•1m ago•0 comments

The Home Computer Hybrids: Atari, TI, and the FCC

https://technicshistory.com/2026/01/25/the-home-computer-hybrids/
1•cfmcdonald•1m ago•0 comments

Show HN: FilaMeter – Local-first filament inventory management for 3D printing

https://filameter.com/
1•ldrrp•1m ago•0 comments

Show HN: VLM Inference Engine in Rust

https://mixpeek.com/blog/building-a-production-ready-vlm-inference-server-in-rust
1•Beefin•1m ago•0 comments

Browsh, the modern text-based browser

https://www.brow.sh/docs/installation/
1•ungawatkt•2m ago•0 comments

Home Lab Developments

https://zitseng.com/archives/25229
1•todsacerdoti•4m ago•0 comments

Show HN: Poast – Publish Quickly from Claude, Cursor, ChatGPT

https://www.poast.sh/post/acb2475e-7871-4f62-9f25-3e60d38861d4
1•k0mplex•5m ago•0 comments

Show HN: ScaleLighthouse – Bulk Lighthouse, Playwright smoke tests, CrUX metrics

https://github.com/acenji/lighthouse
1•acenji•6m ago•0 comments

The Trump Family's Immigrant Story (2018)

https://www.history.com/articles/donald-trump-father-mother-ancestry
1•rolph•8m ago•0 comments

The WABL Test: Would anything of value be lost if you delete this?

https://www.gkogan.co/would-anything-of-value-be-lost/
1•gk1•8m ago•0 comments

The "Bucket Bumping" problem of airline tickets

https://www.dodgycoder.net/2026/01/the-bucket-bumping-problem-of-airline-tickets.html
1•abnercoimbre•9m ago•1 comment

Tesla FSD vs. Snow Ice Emergency Avoidance Braking Lane Perception

https://www.youtube.com/watch?v=6nwhbIOipXQ
1•hnburnsy•9m ago•0 comments

What Are the Greatest Sequels of All Time? A Statistical Analysis (2025)

https://www.statsignificant.com/p/what-are-the-greatest-sequels-of
1•speckx•11m ago•0 comments

The Underground Node Network

https://github.com/mevdschee/underground-node-network/blob/main/README.md
2•insom•12m ago•0 comments

How animators and AI researchers made 'Dear Upstairs Neighbors'

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/dear-upstairs-neighbors/
1•saikatsg•13m ago•0 comments

Dithering – Part 2: The Ordered Dithering

https://visualrambling.space/dithering-part-2/
1•ChrisArchitect•13m ago•1 comment

Show HN: Cmpsbl OS v5.5.0 – A Self-Hosting Cognitive Substrate (131k LOC)

https://zenodo.org/records/18379258
1•promptfluid•14m ago•0 comments

ChatGPT Containers can now run bash, pip/npm install packages and download files

https://simonwillison.net/2026/Jan/26/chatgpt-containers/
3•simonw•17m ago•1 comment

Show HN: PillarLabAI – A reasoning engine for prediction markets

https://pillarlabai.com/
1•simullab•19m ago•0 comments

The IndieWeb and Small Web

https://christiano.dev/post/indieweb_smallweb/
1•birdculture•20m ago•0 comments

Show HN: Hybrid Markdown Editing

https://tiagosimoes.github.io/codemirror-markdown-hybrid/
2•eropatori•20m ago•0 comments

ChatGPT subscription support in Kilo Code

https://blog.kilo.ai/p/use-chatgpt-subscription-inside-kilo
1•mustaphah•20m ago•0 comments

WorkBench-Pro – PC benchmark designed for developer workflows

https://github.com/johanmcad/WorkBenchPro
1•johanmcad•24m ago•0 comments

Win $100 in Tokens: Build any app idea in 7 days using AskCodi

https://askcodi.substack.com/p/win-100-in-tokens-build-any-app-idea
1•askcodi•25m ago•0 comments

Ammonia as an energy carrier: supply chain cost and greenhouse gas emissions

https://pubs.rsc.org/en/content/articlelanding/2026/ee/d5ee05571g
1•PaulHoule•27m ago•0 comments

Immigration: The Federal Solution

https://daviddfriedman.substack.com/p/immigration-the-federal-solution
2•mhb•27m ago•0 comments

Papal Message for 60th World Day of Social Communications Discusses AI

https://www.vatican.va/content/leo-xiv/en/messages/communications/documents/20260124-messaggio-co...
1•danielam•28m ago•0 comments

Convert Markdown to Slack Instantly

https://slackdown.com
1•bildbot•29m ago•0 comments

The Missing Layer of AI: Why Agent Memory Is the Next Frontier

https://medium.com/versanova/the-missing-layer-of-ai-why-agent-memory-is-the-next-frontier-616bb5...
1•gauravsc•32m ago•0 comments

Designing a Passively Safe API

https://www.danealbaugh.com/
1•dalbaugh•33m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•8mo ago

Comments

kzawpl•8mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those claims are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
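To make the distinction concrete, here is a minimal sketch of a NoLiMa-style probe (illustrative only, not the benchmark's code; the filler text is invented, and the Yuki/Dresden pair follows the example from the NoLiMa paper). The needle shares no keywords with the question, so the model must make a one-hop association (Semper Opera House -> Dresden) rather than lexically match the question against the context:

    # Minimal NoLiMa-style probe (illustrative sketch, not the benchmark code).
    import random

    FILLER = "The afternoon passed quietly in the village. " * 50

    def build_nolima_item(approx_sentences: int = 500) -> dict:
        # Needle and question deliberately share no keywords; answering
        # requires knowing the Semper Opera House is in Dresden.
        needle = "Actually, Yuki lives next to the Semper Opera House"
        question = "Which character has been to Dresden?"
        answer = "Yuki"

        # Bury the needle at a random position inside the filler haystack.
        haystack = (FILLER * (approx_sentences // 50)).split(". ")
        haystack.insert(random.randrange(len(haystack)), needle)
        return {"context": ". ".join(haystack),
                "question": question,
                "answer": answer}

    item = build_nolima_item()

An exact-search benchmark would instead plant a needle that repeats the question's own words, which is the much easier task this comment is criticizing.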
vessenes•8mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32K tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. Once training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There just hasn't been a good way yet to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
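A rough sketch of how such an item could be generated and scored (my reconstruction from that description, not OpenAI's code; the graph size, hash length, and depth are arbitrary choices):

    # Graphwalks-style eval item: a random directed graph over hex-hash node
    # names; ground truth = all nodes at a given BFS depth from a start node.
    import random
    import secrets
    from collections import defaultdict, deque

    def make_graphwalks_item(n_nodes=200, n_edges=600, depth=2):
        nodes = [secrets.token_hex(8) for _ in range(n_nodes)]
        edges = [(random.choice(nodes), random.choice(nodes))
                 for _ in range(n_edges)]

        adj = defaultdict(list)
        for src, dst in edges:
            adj[src].append(dst)

        # BFS from a random start node, recording the depth at which
        # each node is first reached.
        start = random.choice(nodes)
        dist = {start: 0}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            for nxt in adj[cur]:
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)

        gold = sorted(n for n, d in dist.items() if d == depth)
        context = "\n".join(f"{src} -> {dst}" for src, dst in edges)
        prompt = (f"{context}\n\nStarting from node {start}, run BFS and "
                  f"list every node at depth exactly {depth}.")
        return prompt, gold

Grading the model's answer is then just a set comparison against the gold list, which is what makes this kind of data cheap to generate and evaluate at any context length.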

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The system is then asked, "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
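Again, a sketch of the idea (my reconstruction, not OpenAI's implementation; the placeholder text generator stands in for whatever they actually use):

    # MRCR-style item: many poems/stories about the same subjects, shuffled
    # into one long context; ask for a direct quote of the k-th poem about
    # one subject, scored by exact match against the gold quote.
    import random

    SUBJECTS = ["tapirs", "bears", "ballerinas"]

    def make_text(kind, subject, i):
        # Placeholder; a real generator would produce varied text.
        return f"A {kind} about {subject} (variant {i}): once upon a time..."

    def make_mrcr_item(per_subject=50):
        docs = [(kind, subj, make_text(kind, subj, i))
                for kind in ("poem", "story")
                for subj in SUBJECTS
                for i in range(per_subject)]
        random.shuffle(docs)

        subj = random.choice(SUBJECTS)
        k = random.randrange(1, per_subject + 1)

        # Gold = the k-th poem about subj in *context order*, which forces
        # the model to count and to tell poems apart from stories.
        poems = [t for kind, s, t in docs if kind == "poem" and s == subj]
        gold = poems[k - 1]

        context = "\n\n".join(t for _, _, t in docs)
        prompt = (f"{context}\n\nGive me, as a direct quote, "
                  f"poem number {k} about {subj}.")
        return prompt, gold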

They only test their own models on MRCR in the published chart, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/