frontpage.

So – – – Is AI a Bubble?

https://blog.andrewyang.com/p/so-is-ai-a-bubble
1•3Sophons•36s ago•0 comments

Why Is Video Editing So Bad on Linux Compared to Windows with Camtasia?

https://nickjanetakis.com/blog/why-is-video-editing-so-bad-on-linux-compared-to-windows-with-camt...
1•nickjj•46s ago•0 comments

The whole point of OpenAI's Responses API is to help them hide reasoning traces

https://www.seangoedecke.com/responses-api/
1•breadislove•1m ago•0 comments

The Internet forgets, but I don't want to

https://alexwlchan.net/2025/social-media-scrapbook/
1•birdculture•2m ago•0 comments

Energy Predictions 2025

https://caseyhandmer.wordpress.com/2025/12/08/energy-predictions-2025/
1•ianrahman•2m ago•0 comments

Friendly reminder: NPM classic tokens will be revoked today

1•valtlfelipe•3m ago•0 comments

Show HN: I built a voice task manager without AI using compromise.js

https://tickk.app/
1•digi_wares•6m ago•0 comments

But, I worry, because I can see the cracks in the wall

https://www.baldurbjarnason.com/2025/but-i-worry/
2•speckx•6m ago•0 comments

More New Amiga Games: Phantom Leap, Freak-Out, and Master of Minefields

https://www.epsilonsworld.com/2025/12/more-new-amiga-games-jaz-drive-and.html
2•doener•7m ago•0 comments

Spain's Civil Guard Arrests Alleged Leader of 260M Euro Crypto Ponzi Scheme

https://www.coindesk.com/business/2025/11/09/spain-s-civil-guard-arrests-alleged-leader-of-260m-e...
1•PaulHoule•8m ago•0 comments

Accessible AI part 1: moving fast and breaking things

https://devinprater.substack.com/p/accessible-ai-part-one
1•devinprater•9m ago•0 comments

Digital House Arrest – How the EU Wants to Disempower Families

https://www.patrick-breyer.de/en/digital-house-arrest-how-the-eu-wants-to-disempower-families/
1•baobun•10m ago•0 comments

On Rails Podcast: How Testing Platform Rainforest QA Tests Itself [video]

https://www.youtube.com/watch?v=ujBS_lN6Dsw
1•robbyrussell•10m ago•1 comments

How to Sell AI to PE

https://substack.com/@nextword/p-180559068
1•gk1•10m ago•0 comments

Emu68: M68K Emulation for ARM

https://github.com/michalsc/Emu68
1•doener•11m ago•0 comments

Hurts blames himself for 5 turnovers in OT loss, 22-19

https://figyj.blogspot.com/2025/12/hurts-blames-himself-for-5-turnovers-in.html
1•FIGYJ•11m ago•0 comments

Obscuring P2P Nodes with Dandelion

https://www.johndcook.com/blog/2025/12/08/dandelion/
2•ibobev•12m ago•0 comments

Fourier transform of a constant function: informal and rigorous

https://www.johndcook.com/blog/2025/12/08/fourier-transform-dc/
1•ibobev•12m ago•0 comments

"Everyone is so..": Entry-level tech workers describe the AI-fueled jobpocalypse

https://restofworld.org/2025/engineering-graduates-ai-job-losses/
1•g-b-r•15m ago•1 comments

Publish, Review, Curate to upend scholarly publishing

https://anil.recoil.org/notes/coar-prc
1•fanf2•16m ago•0 comments

IntelliJ IDEA now supports C/C++

https://plugins.jetbrains.com/plugin/28804-c-c--language-support
2•Otter-man•16m ago•1 comments

Go Proposal: Secret Mode

https://antonz.org/accepted/runtime-secret/
6•todsacerdoti•18m ago•0 comments

Advanced Deep Learning for Physics (ADL4P)

https://tum-pbs.github.io/ADL4P/
1•nabla9•20m ago•0 comments

Show HN: Relia – Open-Source "ESLint" for AWS Costs (Python, Local-First)

https://github.com/davidahmann/relia_oss
1•davidresilify•24m ago•0 comments

What happened to Gopher? The Internet we lost [video]

https://www.youtube.com/watch?v=Flo9kn_nhbg
4•rickcarlino•25m ago•0 comments

How Stablecoins Can Help Criminals Launder Money and Evade Sanctions

https://www.nytimes.com/2025/12/07/technology/how-a-cryptocurrency-helps-criminals-launder-money-...
2•throw0101a•26m ago•1 comments

Ten people who helped shape science in 2025

https://www.nature.com/immersive/d41586-025-03848-1/index.html
2•bookofjoe•26m ago•0 comments

China set to limit access to Nvidia's H200 chips despite Trump export approval

https://www.ft.com/content/c4e81a67-cd5b-48b4-9749-92ecf116313d
2•mohi-kalantari•31m ago•0 comments

A Real-World Look at a Multi-Turn AI Attack Attempt

https://predictabledialogs.com/learn/security/ai-security-multi-turn
1•jaikant•32m ago•0 comments

Why Scanners Fail in Practice: Lessons from the Shai-Hulud Attacks on NPM

https://www.codecentric.de/en/knowledge-hub/blog/why-scanners-fail-in-practice-lessons-from-the-s...
1•F30•33m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those claims are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
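(To make the distinction concrete, here is a toy contrast between a classic exact-match needle question and a NoLiMa-style one that needs a small inferential hop. The filler text, needle, and questions below are invented for illustration; they are not taken from the actual NoLiMa dataset.)

    import random

    # Filler prose to pad the context; any innocuous sentences work here.
    FILLER = [
        "The committee adjourned without reaching a decision.",
        "Rainfall in the region was slightly above average that year.",
        "The museum extended its opening hours for the summer.",
    ]

    def build_haystack(needle, n_filler=2000, seed=0):
        """Bury a single 'needle' sentence somewhere inside a long run of filler."""
        rng = random.Random(seed)
        sentences = [rng.choice(FILLER) for _ in range(n_filler)]
        sentences.insert(rng.randrange(len(sentences)), needle)
        return " ".join(sentences)

    # Classic needle-in-a-haystack: the question reuses the needle's own words,
    # so lexical matching over the context is enough.
    exact_needle = "The secret passphrase is 'blue-tapir-42'."
    exact_question = "What is the secret passphrase?"

    # NoLiMa-style probe (as I understand the idea): the question shares no
    # keywords with the needle, so the model has to make a one-hop inference
    # (famous sail-roofed opera house -> Sydney) rather than string-match.
    latent_needle = ("Yuki mentioned that she lives next door to the famous "
                     "opera house with the white sail-shaped roof.")
    latent_question = "Which character lives in Sydney?"

    prompt = build_haystack(latent_needle) + "\n\nQuestion: " + latent_question
    print(len(prompt.split()), "words of context")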
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" when working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32K of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. Once training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There just hasn't been a good way yet to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
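(A rough sketch of how a Graphwalks-style item could be generated and scored; the graph size, prompt wording, and answer format here are my assumptions, not OpenAI's actual generation code.)

    import random
    from collections import deque

    def make_graph(n_nodes=500, out_degree=3, seed=0):
        """Random directed graph whose node ids are 64-bit hexadecimal hashes."""
        rng = random.Random(seed)
        nodes = [f"{rng.getrandbits(64):016x}" for _ in range(n_nodes)]
        edges = {u: rng.sample(nodes, out_degree) for u in nodes}
        return nodes, edges

    def nodes_at_depth(edges, start, depth):
        """BFS from `start`; return nodes whose shortest distance is exactly `depth`."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            if dist[u] == depth:
                continue  # no need to expand past the target depth
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return sorted(v for v, d in dist.items() if d == depth)

    nodes, edges = make_graph()
    start = random.Random(1).choice(nodes)
    gold = nodes_at_depth(edges, start, depth=2)

    edge_list = "\n".join(f"{u} -> {v}" for u, vs in edges.items() for v in vs)
    prompt = (edge_list + f"\n\nStarting from node {start}, do a BFS and "
              "return every node exactly 2 edges away.")
    # Scoring is then a simple set comparison between the model's answer and `gold`.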

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The model is then asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
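(And a toy version of the MRCR setup, just to show why it forces counting rather than keyword search; the templates and counts are invented for illustration, while the real benchmark uses model-generated poems and stories.)

    import random

    SUBJECTS = ["tapirs", "bears", "ballerinas"]

    def make_item(kind, subject, i):
        """Stand-in for a model-generated poem or story about `subject`."""
        if kind == "poem":
            return f"Poem: O {subject}, verse number {i}, you wander where the rivers bend."
        return f"Story: Once upon a time, the {subject} set out on journey number {i}."

    def build_context(per_combo=50, seed=0):
        rng = random.Random(seed)
        items = [(kind, subj, make_item(kind, subj, i))
                 for kind in ("poem", "story")
                 for subj in SUBJECTS
                 for i in range(per_combo)]
        rng.shuffle(items)  # interleave everything so position carries no hint
        return items

    items = build_context()
    context = "\n\n".join(text for _, _, text in items)

    # The gold answer is the third tapir poem *in context order*, which is what
    # makes the task require counting and type discrimination, not keyword search.
    tapir_poems = [text for kind, subj, text in items if kind == "poem" and subj == "tapirs"]
    gold = tapir_poems[2]
    question = "Return the exact text of the third poem about tapirs."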

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/