
Please give some tips to shorten the Linux device name of NVMe (2023)

https://old.reddit.com/r/linuxquestions/comments/1biouhv/please_give_some_tips_to_shorten_the_nam...
1•transpute•2m ago•0 comments

Infinite Ethics [pdf]

https://nickbostrom.com/ethics/infinite.pdf
1•ChadNauseam•3m ago•0 comments

Dad's Fitness May Be Packaged and Passed Down in Sperm RNA

https://www.quantamagazine.org/how-dads-fitness-may-be-packaged-and-passed-down-in-sperm-rna-2025...
1•ibobev•3m ago•0 comments

Vince Zampella, Developer of Call of Duty and Battlefield, Dead at 55

https://comicbook.com/gaming/news/vince-zampella-developer-of-call-of-duty-and-battlefield-dead-a...
2•superpupervlad•8m ago•0 comments

Glia-associated amyloid β oligomer subtype and rescue from reactive astrogliosis

https://alz-journals.onlinelibrary.wiley.com/doi/10.1002/alz.70968
1•bookofjoe•10m ago•0 comments

I built a retro mini PC which uses game cartridges [video]

http://youtube.com/watch?v=iJbJDowBfi4
2•abeisgreat•12m ago•1 comments

Genesis Open Source Embodied AGI Simulation, Rust (Mamba-3, Not Transformers)

1•RGBra•14m ago•0 comments

Reddit comment led police to identify Brown University shooter

https://9to5mac.com/2025/12/22/reddit-comment-led-police-to-identify-brown-university-shooter/
1•akyuu•15m ago•0 comments

Free to Post, Impossible to Hide: The End of Anonymous Marketplaces

https://medium.com/@lea.leumassart/free-to-post-impossible-to-hide-the-end-of-anonymous-marketpla...
1•pbacdf•16m ago•0 comments

Meilisearch: Make the S3-streaming snapshots an Enterprise Edition feature

https://github.com/meilisearch/meilisearch/pull/6057
1•iruoy•18m ago•0 comments

How AI Collapses and Rebuilds Marketplace Moats

https://www.caseyaccidental.com/p/when-agents-attack-how-ai-collapses
1•gmays•19m ago•0 comments

Banana is Generating Antimatter, And I Detected It [video]

https://www.youtube.com/watch?v=ZOTsDmeM0No
2•jmward01•20m ago•1 comments

Show HN: Cardly – a tiny card-first app to capture people's Gift Cards

https://www.cardlyai.app/
1•Pastaza•20m ago•1 comments

Show HN: Making SVG Sparkline Component with an Agent to Graph Token Usage

https://bsky.app/profile/verdverm.com/post/3makhu3nbm22n
1•verdverm•20m ago•0 comments

Show HN: Find games with few -but positive- reviews based on games that you like

https://www.notsoaaa.com/
1•AmbroseBierce•24m ago•0 comments

Show HN: LLVM-jutsu: Anti-LLM obfuscation pass

https://github.com/thebabush/llvm-jutsu
1•babush•24m ago•0 comments

Welcome to Kenya's Great Carbon Valley: a bold new gamble to fight climate change

https://www.technologyreview.com/2025/12/22/1130153/geothermal-energy-carbon-capture-kenya-climat...
1•rbanffy•24m ago•0 comments

How the Cybertruck's design may have trapped crash survivors in flames

https://www.washingtonpost.com/technology/interactive/2025/cybertruck-crash-design-lawsuit/
2•Jtsummers•24m ago•0 comments

Paperbacks and TikTok

https://calnewport.com/on-paperbacks-and-tiktok/
1•zdw•25m ago•0 comments

Lua 5.5 Released

https://www.lua.org/manual/5.5/readme.html#changes
2•todsacerdoti•25m ago•1 comments

Best way to annotate large parquet LLM logs without full rewrites?

1•platypii•28m ago•0 comments

The Program 2025 annual review: How much money does an audio drama podcast make?

https://programaudioseries.com/the-program-results-7/
2•I-M-S•28m ago•1 comments

ChatGPT Is a Search Engine

https://queryburst.com/blog/how-chatgpt-works/
1•AznHisoka•29m ago•0 comments

Power outage paralyzes Waymo robotaxis when traffic lights go out

https://arstechnica.com/cars/2025/12/power-outage-paralyzes-waymo-robotaxis-when-traffic-lights-g...
2•chirau•30m ago•1 comments

Reducing contrails reduces CO2 effect of air travel 73%, adds only 0.08% to cost [video]

https://www.youtube.com/watch?v=QoOVqQ5sa08
1•CGMthrowaway•34m ago•0 comments

Algorithmic Personalization Causes Inaccurate Generalization and Overconfidence

https://psycnet.apa.org/fulltext/2026-31272-001.html
1•PaulHoule•34m ago•0 comments

Inquiry ongoing after UK government hacked, says minister

https://www.bbc.co.uk/news/articles/cj4qpwprw9vo
1•GaryBluto•34m ago•0 comments

Older Americans Quit Weight-Loss Drugs in Droves

https://www.nytimes.com/2025/12/21/health/older-people-glp1-weight.html
3•bookofjoe•35m ago•1 comments

Samsung Biologics to buy US drug production facility from GSK for $280M

https://www.reuters.com/business/healthcare-pharmaceuticals/samsung-biologics-buy-us-drug-product...
2•randycupertino•36m ago•0 comments

The Fisherman and the Businessman

https://kevquirk.com/blog/the-fisherman-and-the-businessman/
1•0x54MUR41•36m ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
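
To make that distinction concrete, here is a minimal sketch (mine, not NoLiMa's actual code; the filler text, needle sentences, and depth are all illustrative) contrasting a needle that exact text search can find with a NoLiMa-style needle that shares no words with the question:

# Illustrative sketch of the difference between exact-search needles
# and NoLiMa-style needles that require an associative hop.

filler = "The quick brown fox jumps over the lazy dog. " * 20_000  # distractor text

# Literal needle: the question shares words with the needle, so lexical
# matching alone is enough to retrieve it.
literal_needle = "The special magic number for the experiment is 7381."
literal_question = "What is the special magic number for the experiment?"

# NoLiMa-style needle: no lexical overlap with the question; the model
# must know the Semper Opera House is in Dresden to make the link.
latent_needle = "Actually, Yuki lives next to the Semper Opera House."
latent_question = "Which character has been to Dresden?"

def build_prompt(needle: str, question: str, depth: float = 0.5) -> str:
    """Embed the needle at a relative depth inside the filler context."""
    cut = int(len(filler) * depth)
    return f"{filler[:cut]}\n{needle}\n{filler[cut:]}\n\nQuestion: {question}"

prompt = build_prompt(latent_needle, latent_question)

An exact-match retriever (or a model that has only learned lexical lookup) solves the first item at any context length; the second is where accuracy reportedly collapses as context grows.
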
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.
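
A back-of-envelope calculation of that "full cost", using hypothetical model dimensions (32 layers, d_model = 4096), shows why sequence length dominates:

# Rough FLOPs estimate for self-attention alone, which scales
# quadratically with sequence length. Numbers are illustrative.

layers, d_model = 32, 4096

def attn_flops(seq_len: int) -> float:
    # QK^T scores plus the attention-weighted sum of V:
    # roughly 2 * 2 * n^2 * d per layer.
    return layers * 4 * seq_len**2 * d_model

for n in (4_096, 32_768, 1_000_000):
    print(f"{n:>9} tokens: {attn_flops(n):.2e} attention FLOPs")

# The 1M-token case costs roughly 60,000x the 4k case, so a model that
# attends across the whole window really is paying for all of it.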

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
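
That description is easy to reproduce; the sketch below is my own guess at a Graphwalks-style generator (the node counts, edge counts, and prompt wording are assumptions, not OpenAI's code):

import random

def random_hash() -> str:
    return f"{random.getrandbits(64):016x}"

def make_graph(n_nodes: int = 200, n_edges: int = 600):
    nodes = [random_hash() for _ in range(n_nodes)]
    edges = [(random.choice(nodes), random.choice(nodes)) for _ in range(n_edges)]
    return nodes, edges

def bfs_frontier(edges, start: str, depth: int) -> set[str]:
    """Gold answer: nodes first reached at exactly `depth` hops from start."""
    adj: dict[str, list[str]] = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {d for s in frontier for d in adj.get(s, []) if d not in seen}
        seen |= frontier
    return frontier

nodes, edges = make_graph()
start, depth = random.choice(nodes), 2
context = "\n".join(f"{src} -> {dst}" for src, dst in edges)
prompt = (f"{context}\n\nPerform a breadth-first search from node {start} "
          f"and return every node at depth {depth}.")
gold = bfs_frontier(edges, start, depth)

The appeal is that the gold answer is computed exactly, so grading is trivial, while answering still forces the model to attend across the entire window rather than retrieve one location.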

MRCR asks for direct quotes from semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The model is then asked to "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
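
A generator for this is equally mechanical. This sketch is my reconstruction from the description above (the subjects, counts, and placeholder texts are assumptions; the real benchmark presumably uses an LLM to write distinct poems and stories):

import random

SUBJECTS = ["tapirs", "bears", "ballerinas"]
FORMS = ["poem", "story"]

def make_item(form: str, subject: str, i: int) -> str:
    # Placeholder body; a real generator would produce distinct texts.
    return f"Here is {form} #{i} about {subject}: ..."

items = [(form, subj, make_item(form, subj, i))
         for form in FORMS for subj in SUBJECTS for i in range(50)]
random.shuffle(items)  # interleave everything in the context window

context = "\n\n".join(text for _, _, text in items)
query = "Give me the third poem about tapirs, quoted exactly."

# Gold answer: the third poem about tapirs in document order.
tapir_poems = [text for form, subj, text in items
               if form == "poem" and subj == "tapirs"]
gold = tapir_poems[2]  # grading compares the model's quote against this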

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/