frontpage.

KDE going all-in on a Wayland future

https://blogs.kde.org/2025/11/26/going-all-in-on-a-wayland-future/
1•dualogy•3m ago•0 comments

Corecore

https://en.wikipedia.org/wiki/Corecore
1•pizza•3m ago•0 comments

Garfield's Proof of the Pythagorean Theorem

https://en.wikipedia.org/wiki/Garfield%27s_proof_of_the_Pythagorean_theorem
1•benbreen•4m ago•0 comments

Pat Gelsinger: 'I've been called here for a purpose'– Lunch With the FT

https://on.ft.com/3LZnfeq
2•microsoftedging•23m ago•0 comments

UAPs as Coherent Field Entities

https://abacusnoir.com/2025/11/29/field-entities-not-craft/
1•agambrahma•30m ago•1 comments

Planes grounded after Airbus discovers solar radiation could impact systems

https://www.bbc.com/news/articles/c8e9d13x2z7o
2•djtango•31m ago•1 comments

Engineering.fyi

https://www.engineering.fyi/
1•Kinrany•41m ago•0 comments

Cutting C++ Exception Time by +90%? – Khalil Estell – CppCon 2025 [video]

https://www.youtube.com/watch?v=wNPfs8aQ4oo
2•aw1621107•56m ago•0 comments

Plastic can be programmed to have a lifespan of days, months or years

https://www.newscientist.com/article/2506104-plastic-can-be-programmed-to-have-a-lifespan-of-days...
7•thunderbong•1h ago•0 comments

Shopify pulls off real time 8K browser rendering on the Exosphere

https://twitter.com/pushmatrix/status/1994445450527519085
4•dsr12•1h ago•0 comments

The 'S&P 493' reveals a different U.S. economy

https://www.msn.com/en-us/money/markets/the-s-p-493-reveals-a-very-different-us-economy/ar-AA1R1VUJ
18•MilnerRoute•1h ago•1 comments

It's time for our own Space Age

https://www.thomasmoes.com/52obsessions/its-time-for-our-own-space-age
1•freediver•1h ago•0 comments

Show HN: ChikkaDB – A Translation Layer to Use SQLite as a JSON Database

https://github.com/athkishore/chikkadb-ts
2•athkishore•1h ago•0 comments

From Cells to Selves

https://aeon.co/essays/why-you-need-your-whole-body-from-head-to-toes-to-think
2•the-mitr•1h ago•0 comments

The Real-Life Hunt for Red October Happened 50 Years Ago

https://www.twz.com/sea/the-real-life-hunt-for-red-october-happened-50-years-ago
3•NewCzech•1h ago•0 comments

AI Companions shape socio-emotional learning and metacognitive development

https://link.springer.com/article/10.1007/s00146-025-02737-5
3•bettik•1h ago•1 comments

Seven years later, Airbus is still trying to kick its Microsoft habit

https://www.theregister.com/2025/11/26/microsoft_airbus_migration/
4•tbakker•1h ago•0 comments

All the Way Down

https://www.futilitycloset.com/2025/11/17/all-the-way-down-2/
1•surprisetalk•1h ago•0 comments

Wacky Fun Physics Ideas

https://scottlocklin.wordpress.com/2025/11/22/wacky-fun-physics-ideas/
3•surprisetalk•1h ago•0 comments

The Great Downzoning

https://worksinprogress.co/issue/the-great-downzoning/
2•barry-cotter•1h ago•0 comments

Proposing a New Cognitive Constant (Ca) with Full Math and Open Dataset

https://zenodo.org/records/17718241
2•Harry_Yoo•1h ago•1 comments

'Good Boy' Star Indy the Dog Becomes the First Animal Nominated for a Film Award

https://www.yahoo.com/entertainment/movies/articles/good-boy-star-indy-dog-180718575.html
1•thunderbong•1h ago•0 comments

Adventures with Chimera Linux

https://blog.xiaket.org/2025/chimera.html
4•todsacerdoti•1h ago•0 comments

Show HN: New VSCode extension: Objectify Params

https://marketplace.visualstudio.com/items?itemName=eridien.objectify-params
2•mchahn•1h ago•0 comments

Popping-and-Locking-Zed-Theme

https://github.com/randoneering/popping-and-locking-zed-theme
1•todsacerdoti•1h ago•0 comments

Understanding copy-on-write: why Redis needs memory overcommit

https://frn.sh/posts/cow/
3•shellpipe•1h ago•0 comments

Careless Whisper: Silently Monitoring Users on Mobile Instant Messengers

https://arxiv.org/abs/2411.11194
5•wakawaka28•2h ago•1 comments

An ancient foot reveals a hidden human cousin

https://www.sciencedaily.com/releases/2025/11/251128050512.htm
3•ashishgupta2209•2h ago•0 comments

Surely You're Joking, Mr. Feynman

https://en.wikipedia.org/wiki/Surely_You%27re_Joking,_Mr._Feynman!
2•nomilk•2h ago•0 comments

Tim Cook says he uses an iMac G4 as a monitor (2024)

https://www.theverge.com/2024/10/22/24276142/tim-cook-wsj-interview-every-apple-product-every-day
2•uneven9434•2h ago•1 comments

"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
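The "exact text search" style of test being contrasted with NoLiMa can be sketched in a few lines: hide one verbatim fact in a long run of filler and check whether the model's answer reproduces it. This is illustrative only, not any benchmark's actual harness; `ask_model` would be whatever LLM call you plug in.

```python
import random

def needle_in_haystack(n_filler=2000, seed=0):
    """Build a toy exact-match long-context retrieval test: one verbatim
    'needle' fact hidden at a random position in unrelated filler text,
    plus a scoring function that checks whether an answer reproduces it."""
    rng = random.Random(seed)
    needle = "The secret code is 7319."
    filler = ["Sentence %d of unrelated filler text." % i for i in range(n_filler)]
    filler.insert(rng.randrange(len(filler)), needle)  # hide the needle
    context = " ".join(filler)
    question = "What is the secret code?"

    def score(model_answer: str) -> bool:
        # Exact-match retrieval: pass iff the needle's fact is reproduced.
        return "7319" in model_answer

    return context, question, score
```

A test like this only exercises verbatim recall, which is exactly why models can now ace it at millions of tokens while still failing when the answer requires abstraction over the context.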
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But, it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
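A Graphwalks-style task as described above can be generated programmatically along these lines. This is a toy sketch of the idea, not OpenAI's actual generator: random hex-hash node names, a random edge list serialized into the prompt, and a BFS-computed ground-truth answer set.

```python
import random
from collections import deque

def make_graphwalks_prompt(n_nodes=32, n_edges=64, depth=2, seed=0):
    """Build a toy Graphwalks-style task: a directed graph over hex-hash
    node names, serialized as an edge list, plus the ground-truth set of
    nodes exactly `depth` BFS steps from a random start node."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    start = rng.choice(nodes)
    # BFS, recording the depth at which each node is first reached.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        for nxt in adj.get(cur, []):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)

    answer = sorted(n for n, d in dist.items() if d == depth)
    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    prompt += f"\n\nList all nodes exactly {depth} BFS steps from {start}."
    return prompt, answer
```

Because the answer is computed exactly, grading is trivial, and because hex hashes carry no semantic cues, the model has to actually traverse the edges spread across the context rather than pattern-match on surface text.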

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty each. The system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
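The MRCR setup described in the comment above can be sketched the same way. Again, this is a toy illustration rather than OpenAI's generator; the "poems" and "stories" here are placeholder strings, where the real benchmark uses model-generated texts.

```python
import random

def make_mrcr_task(n_each=50, seed=0):
    """Toy MRCR-style task: interleave many generated poems and stories
    about several subjects, then ask for e.g. the third poem about
    tapirs. The ground truth is the exact text of that item."""
    rng = random.Random(seed)
    subjects = ["tapirs", "bears", "ballerinas"]
    items = []
    for kind in ("poem", "story"):
        for subj in subjects:
            for i in range(n_each):
                text = f"{kind} #{i + 1} about {subj}: lorem ipsum {rng.randint(0, 9999)}"
                items.append((kind, subj, text))
    rng.shuffle(items)  # scatter the items throughout the context

    context = "\n\n".join(text for _, _, text in items)
    # Ground truth for "give me the third poem about tapirs" is the third
    # such item in order of appearance in the shuffled context.
    tapir_poems = [t for kind, subj, t in items if kind == "poem" and subj == "tapirs"]
    question = "Give me the third poem about tapirs."
    answer = tapir_poems[2]
    return context, question, answer
```

The grading is again an exact string match, but answering correctly forces the model to track ordinal position ("third"), category ("poem" vs "story"), and subject ("tapirs") across the whole context at once.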

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/