
Particle Physicists Detect 'Magic' at the Large Hadron Collider

https://www.quantamagazine.org/particle-physicists-detect-magic-at-the-large-hadron-collider-2025...
1•tzury•42s ago•0 comments

White House gives Maduro ultimatum as U.S. moves toward land operations

https://www.miamiherald.com/news/nation-world/world/americas/venezuela/article313261442.html
1•clanky•1m ago•0 comments

Switzerland votes decisively against inheritance tax

https://www.economist.com/europe/2025/11/30/switzerland-votes-decisively-against-inheritance-tax
1•vinni2•3m ago•0 comments

Show HN: Generate Storyboards with Nano Banana from the CLI

https://github.com/kierangilliam/storyboard
2•kierangill•3m ago•0 comments

Who Will Observe the Observability? eBPF Performance at Scale

https://blog.zmalik.dev/p/who-will-observe-the-observability
1•tanelpoder•3m ago•0 comments

Why Staff+ Hiring Is a Different Game

https://medium.com/@yves.greijn_19041/fcb10ed6e880
1•hunglee2•4m ago•0 comments

Volkswagen can now build EVs in China, claiming it can cut costs by up to 50%

https://electrek.co/2025/11/25/volkswagen-build-evs-china-cut-costs-by-50/
2•ilamont•5m ago•1 comment

Plans for MySQL Vector Support and a MySQL Binlog Server

https://www.percona.com/blog/building-the-future-of-mysql-announcing-plans-for-mysql-vector-suppo...
1•tanelpoder•5m ago•0 comments

Show HN: GoodQuestions – a tiny site of genuinely good, human-curated questions

https://goodquestions.qzz.io/
1•juliakzl_•6m ago•0 comments

Lightyear.fm – radio waves far from Earth

https://lightyear.fm/
1•memalign•7m ago•0 comments

Waze but Built for Tesla

https://old.reddit.com/r/TeslaLounge/comments/1p9x9zk/i_created_a_better_inbrowser_tesla_waze_map...
1•ryanvogel•7m ago•0 comments

Norway's $2T Wealth Fund Has Become an Election Football

https://www.bloomberg.com/news/articles/2025-09-04/norway-election-trump-ally-takes-on-world-s-bi...
1•alephnerd•18m ago•0 comments

Building the Perfect Linux PC with Linus Torvalds

https://youtu.be/mfv0V1SxbNA?si=ASyHL7YiMtdOCVen
6•tiernano•21m ago•0 comments

Hacking on the ReMarkable 2

https://sgt.hootr.club/blog/hacking-on-the-remarkable-2/
1•todsacerdoti•31m ago•0 comments

By my count, Linux has 11% of the desktop market. Here's how I got that number

https://www.zdnet.com/article/why-people-keep-flocking-to-linux-in-2025-and-its-not-just-to-escap...
9•breve•33m ago•0 comments

Subversion beats Perforce in handling large files, and it's not even close

https://www.liamfoot.com/subversion-beats-perforce-in-handling-large-files-and-its-not-even-close
2•prmph•36m ago•1 comment

Kv.js: Advanced in-memory caching for JavaScript

https://www.npmjs.com/package/@heyputer/kv.js
1•ent101•38m ago•0 comments

Reverse Engineering the Next.js Job Interview Malware (Hidden in Next.config.js)

https://dzentota.medium.com/reverse-engineering-the-next-js-job-interview-malware-targeting-lastp...
2•dzentota•39m ago•1 comment

Oxylipins from Soybean Oil Driving Obesity

https://www.jlr.org/article/S0022-2275(25)00195-6/fulltext
1•Noaidi•40m ago•0 comments

Dangerous Streets: Using ML to Prioritize Cyclist Safety

https://joshfonseca.com/blogs/dangerous-streets
2•m-hodges•40m ago•0 comments

$1000 bounty to add a feature to coolify

https://github.com/coollabsio/coolify/issues/7423
3•jimmydin7•41m ago•0 comments

Golden Dome (orbital weapon system)

https://en.wikipedia.org/wiki/Golden_Dome_(missile_defense_system)
2•exomonk•44m ago•0 comments

GhidrAssist and GhidrAssistMCP LLM plugins reached v1.0

2•jtang613•44m ago•0 comments

Training Foundation Models on a Full-Stack AMD Platform

https://arxiv.org/abs/2511.17127
1•ngaut•44m ago•0 comments

Can bigger-is-better 'scaling laws' keep AI improving forever?

https://theconversation.com/can-bigger-is-better-scaling-laws-keep-ai-improving-forever-history-s...
6•devonnull•46m ago•0 comments

I can't tell if this photo is real or AI and that terrifies me

https://twitter.com/immasiddx/status/1992979078220263720
3•bakigul•48m ago•2 comments

AI rendering of Roman war scenes from Trajan's Column

https://trajancolumn.com
1•unix-junkie•48m ago•0 comments

Show HN: Sportfoli – A Simple, Clean Sports Profile Builder for Athletes

https://www.sportfoli.com/
1•ethjdev•49m ago•0 comments

Mystery foot belongs to ancient human relative

https://www.france24.com/en/live-news/20251127-mystery-foot-belongs-to-ancient-human-relative-sci...
1•gmays•51m ago•0 comments

Show HN: Boing #2

https://boing.playcode.io
2•ianberdin•54m ago•1 comment

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities in LLMs, but those are usually tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the model to perform some abstraction and reasoning.
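To make the distinction concrete, here is a minimal sketch in Python; the filler, needles, and questions are illustrative stand-ins in the spirit of NoLiMa's one-hop questions, not taken from the benchmark itself:

    # Long distractor context the needle gets buried in.
    FILLER = "The rain fell softly on the old town square. " * 2000

    # Literal needle-in-a-haystack: the answer shares words with the
    # question, so exact text search is enough to find it.
    literal_needle = "The access code for the vault is 7421."
    literal_question = "What is the access code for the vault?"

    # NoLiMa-style probe: answering requires a latent association
    # (Semper Opera House -> Dresden), not string matching.
    nolima_needle = "Actually, Yuki lives next to the Semper Opera House."
    nolima_question = "Which character has been to Dresden?"

    def build_prompt(needle: str, question: str) -> str:
        # Splice the needle into the middle of the long context.
        mid = len(FILLER) // 2
        context = FILLER[:mid] + needle + " " + FILLER[mid:]
        return context + "\n\nQuestion: " + question

    prompt = build_prompt(nolima_needle, nolima_question)

Models that ace the literal version can still miss the second kind of needle once the context gets long.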
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something models could do perfectly across millions of tokens of context. There just hasn't been a good way yet to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
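That recipe is easy to sketch; here is a rough Python version, where the graph size, degree, and prompt wording are my guesses rather than OpenAI's actual setup:

    import random

    def make_graph(n_nodes=200, out_degree=3, seed=0):
        # Nodes are random hexadecimal hashes; edges are random.
        rng = random.Random(seed)
        nodes = ["%08x" % rng.getrandbits(32) for _ in range(n_nodes)]
        edges = {u: rng.sample([v for v in nodes if v != u], out_degree)
                 for u in nodes}
        return nodes, edges

    def bfs_at_depth(edges, root, depth):
        # Gold answer: nodes first reached exactly `depth` hops from root.
        seen, frontier = {root}, [root]
        for _ in range(depth):
            nxt = []
            for u in frontier:
                for v in edges[u]:
                    if v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        return sorted(frontier)

    nodes, edges = make_graph()
    root = nodes[0]
    edge_list = "\n".join(f"{u} -> {v}" for u in nodes for v in edges[u])
    prompt = (edge_list +
              f"\n\nDo a BFS from {root} and list every node at depth 2.")
    gold = bfs_at_depth(edges, root, 2)

Because the gold set comes from actually running BFS, grading is an exact set comparison -- which is what makes this cheap to generate and evaluate at scale.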

MRCR asks for direct quotes at semantically identified locations in the text. For example, poems about tapirs, bears, and ballerinas are generated alongside stories about tapirs, bears, and ballerinas, perhaps fifty of each, and the model is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
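That one also reduces to a small generator plus exact-match grading; a toy sketch, with templated placeholder text standing in for the model-written poems and stories the real benchmark uses:

    import random

    TOPICS = ["tapirs", "bears", "ballerinas"]
    KINDS = ["poem", "story"]

    rng = random.Random(0)
    items = []
    for topic in TOPICS:
        for kind in KINDS:
            for i in range(50):  # "perhaps fifty each"
                items.append((kind, topic, f"A {kind} about {topic} (#{i}): ..."))
    rng.shuffle(items)  # interleave everything in the context window

    context = "\n\n".join(text for _, _, text in items)

    def gold_answer(kind, topic, ordinal):
        # "Give me the third poem about tapirs" -> the third matching
        # item in order of appearance; graded by exact string match.
        matches = [t for k, tp, t in items if k == kind and tp == topic]
        return matches[ordinal - 1]

    gold = gold_answer("poem", "tapirs", 3)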

They only test their own models on MRCR in the benchmark chart, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/