U.S. to Withdraw 5k Troops from Germany, Pentagon Says

https://www.nytimes.com/2026/05/01/us/politics/us-troops-germany.html
2•mikhael•4m ago•0 comments

uget – stupid get-file-over-HTTP program/function

https://github.com/troglobit/uget
1•peter_d_sherman•6m ago•0 comments

Visual Studio 2026 still ships the form designer Alan Cooper drew in 1987

https://evilgeniuslabs.ca/blog/winforms-still-ships-in-visual-studio-2026
2•jordand•12m ago•0 comments

Oregon's Non-Affiliated Surge and the Socialist Realignment Nobody Talks About

https://fullstack.ing/posts/the-flight-from-party-oregons-non-affiliated-surge-and-the-socialist-...
2•fullstacking•20m ago•0 comments

Humanity on the Page

https://www.commonwealmagazine.org/writing-artificial-intelligence-ai-rand-richards-cooper
1•cainxinth•20m ago•0 comments

Show HN: TTS Studio: AI-Powered Text-to-Speech Tool

https://tts.haroun.dev/
2•shmayro•21m ago•0 comments

I got infected with a crypto-miner via misconfigured qBittorrent

https://blog.vasi.li/well-i-got-hacked/
2•vsviridov•24m ago•0 comments

What Software Engineers Can Learn from the Aviation Industry

https://mwalterskirchen.dev/blog/piloting-agentic-engineering/
2•pseudolus•29m ago•0 comments

NASA's Curiosity and Perseverance rovers capture Mars panoramas [video]

https://www.space.com/astronomy/mars/nasas-curiosity-and-perseverance-rovers-capture-sweeping-mar...
2•teleforce•32m ago•0 comments

A Report on Burnout in Open Source Software Communities (2025) [pdf]

https://mirandaheath.website/static/oss_burnout_report_mh_25.pdf
3•susam•32m ago•0 comments

New v2 UALink specification aims to catch up to NVLink

https://www.networkworld.com/article/4155357/new-v2-ualink-specification-aims-to-catch-up-to-nvli...
2•mindcrime•32m ago•0 comments

Keep Android Open: Why Free Android Matters

https://tux.re/forum/viewtopic.php?t=203
4•tux033•33m ago•0 comments

On Taste

https://endler.dev/2026/taste/
2•lwhsiao•37m ago•0 comments

Palantir Workers Are Finally Noticing the Skulls on Their Caps

https://www.techdirt.com/2026/04/30/palantir-workers-are-finally-noticing-the-skulls-on-their-caps/
8•throawayonthe•38m ago•3 comments

WolfCOSE: Zero alloc, PQC, MISRA-C, FIPS 140-3 built with wolfCrypt

https://github.com/aidangarske/wolfCOSE
2•aidangarske•38m ago•0 comments

AI Companies Can't Regulate Themselves. They Should Regulate Each Other

https://www.lawfaremedia.org/article/ai-companies-can-t-regulate-themselves-they-should-regulate-...
1•nedruod•40m ago•0 comments

Pentagon officials broadly detail $55B drone plan under DAWG

https://breakingdefense.com/2026/04/pentagon-officials-broadly-detail-55-billion-drone-plan-under...
1•thegdsks•40m ago•0 comments

Show HN: News on the Go

https://hncast.com/
2•ynarwal__•41m ago•0 comments

Industrial Policy for the Intelligence Age [pdf]

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%2...
1•avaer•46m ago•0 comments

A Programmer's Guide to Common Lisp

https://archive.org/details/a-programmers-guide-to-common-lisp
8•jellinek•48m ago•1 comments

Running a custom trained Piper TTS model on Raspberry Pi Zero 2W

https://old.reddit.com/r/LocalLLM/comments/1t0xho8/running_a_custom_trained_piper_tts_model_on/
1•yakkomajuri•49m ago•0 comments

Disabling the new AF_ALG by default in gnulib (from 2018)

https://lists.gnu.org/archive/html/coreutils/2018-06/msg00034.html
2•dxdxdt•49m ago•0 comments

Copy-Fail: Linux Privilege Escalation

https://copy.fail/#affected
2•joatmon-snoo•50m ago•1 comments

Bitcoin Is Venice (2021)

https://allenfarrington.medium.com/bitcoin-is-venice-bitcoin-is-741cc7d22e9
1•simonebrunozzi•50m ago•0 comments

Active exploitation of cPanel/WHM critical vulnerability

https://www.cyber.gov.au/about-us/view-all-content/alerts-and-advisories/active-exploitation-of-c...
1•Svoka•50m ago•0 comments

'Empire of Skulls' book review: When phrenology raced ahead

https://www.wsj.com/arts-culture/books/empire-of-skulls-review-when-phrenology-raced-ahead-1c1fdab0
4•hhs•51m ago•1 comments

Is Rise of the Robots (1994) the worst game?

https://old.reddit.com/r/amiga/comments/1t1407x/is_rise_of_the_robots_1994_actually_the_worst/
1•doener•51m ago•0 comments

Ask HN: Any nice project ideas that you know you'll never bring to life

1•atilimcetin•52m ago•1 comments

New study finds task switching raises risk in transplant surgeries

https://news.vt.edu/articles/2026/04/pamplin-bit-research-organ-transplant-task-switching.html
1•hhs•55m ago•0 comments

GameStop is preparing offer for eBay

https://finance.yahoo.com/markets/stocks/articles/gamestop-preparing-offer-ebay-wsj-212703455.html
2•avonmach•58m ago•0 comments

"A million token context," Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•12mo ago

Comments

kzawpl•12mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•12mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
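A Graphwalks-style task could be sketched roughly like this -- note that the graph size, hash length, and function names here are illustrative choices of mine, not OpenAI's actual harness:

```python
import random
import hashlib

def make_graph(n_nodes=50, n_edges=150, seed=0):
    """Build a random directed graph whose node names are hex hashes."""
    rng = random.Random(seed)
    nodes = [hashlib.sha1(str(i).encode()).hexdigest()[:8] for i in range(n_nodes)]
    edges = set()
    while len(edges) < n_edges:
        edges.add((rng.choice(nodes), rng.choice(nodes)))
    return nodes, sorted(edges)

def bfs_at_depth(edges, start, depth):
    """Ground truth: the set of nodes first reached exactly `depth`
    BFS steps from `start` (what the model is asked to return)."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    frontier, seen = {start}, {start}
    for _ in range(depth):
        nxt = set()
        for node in frontier:
            for dst in adj.get(node, []):
                if dst not in seen:
                    seen.add(dst)
                    nxt.add(dst)
        frontier = nxt
    return frontier
```

The edge list gets serialized into the prompt; because the answer is computable, grading is exact-match rather than judged.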

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
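The MRCR setup could be mocked up along these lines -- the topics, counts, and placeholder texts are stand-ins for illustration, not the actual benchmark data:

```python
import random

def make_mrcr_corpus(topics=("tapirs", "bears", "ballerinas"), per_kind=5, seed=0):
    """Generate labelled (kind, topic, text) items in shuffled order,
    mimicking the MRCR interleaving of poems and stories."""
    rng = random.Random(seed)
    items = [(kind, topic, f"{kind} #{i} about {topic}")
             for kind in ("poem", "story")
             for topic in topics
             for i in range(per_kind)]
    rng.shuffle(items)
    return items

def nth_item(items, kind, topic, n):
    """Ground-truth answer for 'give me the n-th <kind> about <topic>':
    the n-th match (1-indexed) in order of appearance, quoted verbatim."""
    matches = [text for k, t, text in items if k == kind and t == topic]
    return matches[n - 1]
```

Because the correct quote is known by construction, the model's answer can be checked by string comparison across arbitrarily long contexts.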

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/