frontpage.

I got tired of messy AI tool lists – so I built a clean, searchable one for devs

https://aithings.dev
1•rutagandasalim•12s ago•1 comments

Show HN: NodeLoop – Hub for electronics design knowledge and tools

https://nodeloop.org/
1•eezZ•1m ago•0 comments

Steam censors LGBTQ+ content on behalf of the Russian Government

https://www.videogamesindustrymemo.com/p/how-steam-censors-lgbtq-content-on
2•HelloUsername•1m ago•0 comments

Deep Work vs. the Cyborg Hyperactive Cracked-Out Agent Allocator

https://bengoldhaber.substack.com/p/deep-work-vs-the-cyborg-hyperactive
1•lindowe•2m ago•0 comments

Lowtype: Elegant Types in Ruby

https://codeberg.org/Iow/type
1•todsacerdoti•6m ago•0 comments

Ask HN: Why does scale values between -1 and 1 return nothing on Google search

1•sonabinu•8m ago•0 comments

GOG Patrons

https://www.gog.com/en/patrons
2•HunOL•8m ago•0 comments

We're Losing Our Voice to LLMs

https://tonyalicea.dev/blog/were-losing-our-voice-to-llms/
2•TonyAlicea10•9m ago•0 comments

The Idiot Sandwich – On Embedding Alt Text

https://shkspr.mobi/blog/2025/11/the-idiot-sandwich-on-embedding-alt-text/
1•ColinWright•9m ago•0 comments

When Ants Are Smarter Than People

https://aethermug.com/posts/my-notes-on-when-ants-are-smarter-than-people
1•mrcgnc•11m ago•0 comments

KDE going all-in on a Wayland future

https://blogs.kde.org/2025/11/26/going-all-in-on-a-wayland-future/
1•birdculture•12m ago•0 comments

Don't be a scary old guy: My 40s survival strategy with charm

https://www.devas.life/dont-be-a-scary-old-guy-my-40s-survival-strategy-with-charm/
3•ashleynewman•12m ago•0 comments

Tattoo ink induces lymph node inflammation & alters immune response to vaccination

https://www.pnas.org/doi/10.1073/pnas.2510392122
2•bookofjoe•12m ago•0 comments

LLM Observatory

https://llm-observatory.org/index.html
1•myth_drannon•14m ago•0 comments

Show HN: Henry Perigal's Visual Proof of the Pythagoras Theorem

https://do-say-go.github.io/insights/others/interactive_perigals_pythagorean.html
2•keepamovin•20m ago•1 comments

Tell HN: Happy Thanksgiving – Grateful

3•emreb•22m ago•1 comments

Show HN: Auto-Unpublish NPM Packages Published Outside CI

https://github.com/telophasehq/tangent-plugins/tree/main/detections/sha1hulud/npmcicorrelation
4•ethanblackburn•22m ago•1 comments

Show HN: SyncKit – Offline-first sync engine (Rust/WASM and TypeScript)

https://github.com/Dancode-188/synckit
5•danbitengo•28m ago•1 comments

Toll in Hong Kong fire rises to 65, police cite 'grossly negligent' firm

https://www.reuters.com/world/china/hong-kong-tower-fire-toll-rises-44-police-arrest-three-2025-1...
3•Inocez•29m ago•0 comments

Use Minimal APIs over Controllers for new apps

https://www.roundthecode.com/dotnet-blog/why-you-must-use-minimal-apis-over-controllers-new-apps
2•PretzelFisch•29m ago•0 comments

Does anyone run ads successfully?

2•XCSme•29m ago•3 comments

GNU C Library Sees Up to 12.9x Improvement with New Generic FMA Implementation

https://www.phoronix.com/news/Glibc-New-Generic-FMA
1•Bender•31m ago•0 comments

I was left hospitalized and coughing up blood after using a glass straw

https://www.dailymail.co.uk/health/article-15325379/glass-straws-accident-TikTok-blood-stomach.html
4•Bender•32m ago•0 comments

Show HN: Runprompt – run .prompt files from the command line

https://github.com/chr15m/runprompt
4•chr15m•34m ago•0 comments

Show HN: Alt – A local AI lecture/meeting notetaker

https://www.altalt.io/en
3•predict-woo•37m ago•1 comments

Show HN: SceneYou.art – Your Personal AI Visual Studio

https://sceneyou.art/
2•zy5a59•37m ago•0 comments

Neuracore raises $3M to power next-gen robots and open robotics research

https://earlybird.com/perspectives/backing-neuracore-reinventing-data-infrastructure-for-robotics
2•felixneuraco•37m ago•0 comments

Equal things that don't look equal

https://www.johndcook.com/blog/2025/11/27/hyperbolic-metric-formulas/
3•ibobev•38m ago•0 comments

Launch: Rivellium – AI-powered multi-asset investing with real SMB cashflow

2•rivellium•38m ago•1 comments

Four-inch worm hatches in woman's forehead, wriggles to her eyelid

https://arstechnica.com/health/2025/11/doctors-pull-4-inch-worm-out-of-womans-eyelid-after-monthl...
4•Bender•38m ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2–4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•6mo ago

Comments

kzawpl•6mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform abstraction and reasoning.
vessenes•6mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence once we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. Once training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
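That generation scheme is easy to sketch. Here's a toy version (my own illustration, not OpenAI's actual generator): build a random directed graph over hex-hash labels, serialize the edge list into the prompt, and compute the ground-truth answer with a plain BFS, so grading is an exact set comparison.

```python
import random

def bfs_at_depth(adj, start, depth):
    """Nodes reached in exactly `depth` BFS steps from `start` (shortest-path depth)."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, ())} - seen
        seen |= frontier
    return sorted(frontier)

def make_graphwalks_prompt(num_nodes=32, num_edges=64, depth=2, seed=0):
    """Toy Graphwalks-style task: random digraph of hex hashes + BFS answer key."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(num_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(num_edges)]
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, set()).add(dst)
    start = rng.choice(nodes)
    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    prompt += f"\n\nDo a BFS from node {start} and list all nodes exactly {depth} steps away."
    return prompt, bfs_at_depth(adj, start, depth)
```

The point is that the task is cheap to generate at any context length (just add edges) and impossible to answer by copying one local span -- the model has to attend across the whole edge list.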

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each. The system is then asked, "Give me the third poem about tapirs." This requires counting, conceptual attention, and also distinguishing between stories and poems.
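A minimal sketch of that setup (again my own illustration, with made-up filler text, not OpenAI's generator): interleave many form/subject snippets, then ask for the Nth occurrence of one form-subject pair, which can be graded by exact string match.

```python
import random

SUBJECTS = ["tapirs", "bears", "ballerinas"]  # example topics from the post
FORMS = ["poem", "story"]

def make_mrcr_example(per_pair=5, seed=0):
    """Toy MRCR-style task: shuffled poems/stories per subject,
    plus a question asking for the Nth poem about one subject."""
    rng = random.Random(seed)
    items = []
    for form in FORMS:
        for subj in SUBJECTS:
            for i in range(per_pair):
                items.append((form, subj, f"A {form} about {subj}, variant {i}."))
    rng.shuffle(items)

    n = rng.randrange(per_pair)       # which occurrence to ask for
    subj = rng.choice(SUBJECTS)
    context = "\n\n".join(text for _, _, text in items)
    question = f"Quote poem number {n + 1} about {subj}, exactly as written."
    # Ground truth: the nth poem about subj in document order.
    answer = [t for f, s, t in items if f == "poem" and s == subj][n]
    return context, question, answer
```

Because the needles are only distinguishable by form, subject, and ordinal position, string search alone can't solve it -- the model has to track all matching occurrences across the context and count them.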

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/