
Show HN: Phonetic Formatter – offline English text to IPA on iPhone and iPad

https://apps.apple.com/us/app/phonetic-formatter-english/id6757941187
1•louischen•12s ago•0 comments

Show HN: Economic growth is a power law

https://julienreszka.github.io/economic-simulator/armey-curve.html
2•julienreszka•6m ago•0 comments

Why C Remains the Gold Standard for Cryptographic Software

https://www.wolfssl.com/why-c-remains-the-gold-standard-for-cryptographic-software/
2•LinuxJedi•8m ago•1 comment

40 Years Ago, a Nuclear Catastrophe at Chernobyl

https://www.nytimes.com/2026/04/26/world/europe/40-years-ago-a-nuclear-catastrophe-at-chernobyl.html
2•HelloUsername•9m ago•0 comments

Codex MSN Interface

https://codexmessenger.net/
1•blef•14m ago•0 comments

Headless websites and the cost of engineering vanity

https://www.jonoalderson.com/conjecture/headless-websites/
1•misone•15m ago•0 comments

Quick tutorial to get a blog online from Org Mode thanks to Org Social

https://en.andros.dev/blog/c68f00c3/quick-tutorial-to-get-a-blog-online-from-org-mode-thanks-to-o...
1•andros•16m ago•0 comments

APL is more French than English

https://www.jsoftware.com/papers/perlis78.htm
2•tosh•17m ago•0 comments

The Knight Programming Language

https://github.com/knight-lang/knight-lang/tree/master
2•tosh•19m ago•0 comments

Exposing Floating Point – Bartosz Ciechanowski

https://ciechanow.ski/exposing-floating-point/
2•subset•22m ago•0 comments

Seven database engines in a single Rust binary

https://github.com/nodeDB-Lab/nodedb
1•mansarip•26m ago•0 comments

Tip: Web requests should not be measured in Hz [Hertz]

https://mastodon.catgirl.cloud/@sophie/116467789133733136
1•robin_reala•28m ago•0 comments

Self-Updating Screenshots

https://interblah.net/self-updating-screenshots
1•bjhess•40m ago•1 comment

Open grid data has a public benefit

https://nworbmot.org/blog/open-grid-data.html
2•lyoncy•41m ago•0 comments

Airprompt – SSH into your Mac from your phone for AI agent prompts

https://www.npmjs.com/package/airprompt
2•hatefrad•43m ago•1 comment

Show HN: A community powered global network of probes

https://github.com/jsdelivr/globalping
1•jimaek•45m ago•0 comments

The Scrum-to-POM Transition Is a Role Repositioning Event

https://age-of-product.com/scrum-to-pom-transition/
1•swolpers•47m ago•0 comments

Pytest-cloudreport – local HTML reports and flaky-test detection for pytest

https://github.com/ahmad212o/pytest-cloudreport
1•ahmad212o•48m ago•0 comments

Blueprint: AI Hardware Design

https://www.blueprint.am/
1•handfuloflight•51m ago•0 comments

US is making Europe pay dearly for its half-hearted electrification

https://www.programmablemutter.com/cp/195461224
2•hackandthink•53m ago•0 comments

The reporters at this news site are AI bots. OpenAI's super PAC is funding it

https://twitter.com/TheMidasProj/status/2047692328396034490
1•pretext•58m ago•0 comments

San Francisco must preserve the birthplace of the Mission burrito

https://www.sfchronicle.com/food/restaurants/article/el-faro-mission-burrito-creator-22206173.php
3•divbzero•58m ago•0 comments

Enterprises Are Rethinking Kubernetes

https://www.infoworld.com/article/4161056/enterprises-are-rethinking-kubernetes.html
3•milkglass•1h ago•0 comments

Talk to a stranger for fun or everything else

https://bakbak.fun/
3•chintan39•1h ago•1 comment

The West Forgot How to Make Things. Now It's Forgetting How to Code

https://techtrenches.dev/p/the-west-forgot-how-to-make-things
91•milkglass•1h ago•36 comments

The Coding Assistant Breakdown: More Tokens Please

https://newsletter.semianalysis.com/p/the-coding-assistant-breakdown-more
1•gmays•1h ago•0 comments

WTF Are Metaballs?

https://www.youtube.com/watch?v=LW03EEKjy9o
2•gdubs•1h ago•3 comments

Iran war hits Dubai chocolate pistachio supplies

https://www.ft.com/content/438ef32a-59e5-41b3-a0da-569716385347
1•KnuthIsGod•1h ago•0 comments

CO operating system age-verification open-source exemption doesn't include Linux

https://twitter.com/LundukeJournal/status/2048199650117554678
6•gasull•1h ago•0 comments

Why Rome Never Industrialized [video]

https://www.youtube.com/watch?v=uR8-AF6NJcc
2•Khaine•1h ago•1 comment

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•11mo ago

Comments

kzawpl•11mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested only on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor when you want the model to perform some abstraction and reasoning.
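NoLiMa's core distinction is between a literal needle-in-a-haystack probe (the answer is a copy-paste match for the question) and an associative probe that requires one hop of reasoning over the planted fact. A toy sketch of that contrast, where the character name, the planted sentence, and the Dresden/Semper Opera House world-knowledge link are all illustrative:

```python
def make_probes(filler, fact="Yuki lives next to the Semper Opera House."):
    """Build two (context, question) probes over the same planted fact:
    one answerable by literal string matching, one requiring a one-hop
    associative link. All names here are illustrative."""
    mid = len(filler) // 2
    context = filler[:mid] + "\n" + fact + "\n" + filler[mid:]

    # Literal probe: the question shares surface wording with the fact,
    # so exact-match retrieval suffices.
    literal = (context, "Who lives next to the Semper Opera House?")

    # Associative probe: answering requires the latent link that the
    # Semper Opera House is in Dresden -- no shared wording to match on.
    associative = (context, "Which character has been to Dresden?")

    return literal, associative
```

Scaling the `filler` up to tens of thousands of tokens and scoring both probe types separately is, roughly, how a NoLiMa-style evaluation separates retrieval from reasoning.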
vessenes•11mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. Once training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There just hasn't yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend across the context more fully if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and that forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
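A Graphwalks-style task generator along those lines can be sketched in a few lines of Python. This is a minimal illustration of the idea as described above, not OpenAI's actual harness; the node count, edge count, and prompt wording are all assumptions:

```python
import random
from collections import deque

def make_graphwalks_prompt(num_nodes=32, num_edges=96, depth=2, seed=0):
    """Generate a toy Graphwalks-style task: a directed graph over
    hexadecimal node IDs, plus the gold answer (all nodes first
    reached at exactly `depth` BFS levels from a random start)."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(num_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(num_edges)]

    # BFS from a random start node, recording each node's first-visit depth.
    start = rng.choice(nodes)
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    level = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        for nxt in adj.get(cur, []):
            if nxt not in level:
                level[nxt] = level[cur] + 1
                queue.append(nxt)

    gold = sorted(n for n, d in level.items() if d == depth)
    prompt = "\n".join(f"{s} -> {d}" for s, d in edges)
    prompt += f"\n\nPerform BFS from {start}. List all nodes at depth {depth}."
    return prompt, gold
```

Because the graph is synthetic, the gold answer is computed exactly, so grading a model's response is a simple set comparison -- which is what makes this kind of data cheap to generate and evaluate at scale.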

MRCR asks for direct quotes at semantically identified locations in the text. E.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The system is then asked to "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
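An MRCR-style generator can be sketched the same way. Again a toy illustration under stated assumptions (placeholder item text instead of real generated poems, and a fixed query), not OpenAI's implementation:

```python
import random

def make_mrcr_prompt(kinds=("poem", "story"),
                     topics=("tapirs", "bears", "ballerinas"),
                     copies=5, seed=0):
    """Toy MRCR-style task: interleave many (kind, topic) items, then
    ask for the third poem about tapirs in document order. Returns the
    prompt and the exact gold quote the model must reproduce."""
    rng = random.Random(seed)
    items = []
    for kind in kinds:
        for topic in topics:
            for i in range(copies):
                items.append((kind, topic, f"A {kind} about {topic}, variant {i}."))
    rng.shuffle(items)  # scatter items so answering requires counting

    # Gold answer: the 3rd poem about tapirs as they appear in the prompt.
    tapir_poems = [text for kind, topic, text in items
                   if kind == "poem" and topic == "tapirs"]
    gold = tapir_poems[2]

    prompt = "\n\n".join(text for _, _, text in items)
    prompt += "\n\nGive me the third poem about tapirs, quoted exactly."
    return prompt, gold
```

Since the gold answer is a verbatim substring of the prompt, grading reduces to an exact string match, while producing it still forces the model to count matching items and distinguish poems from stories across the whole context.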

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/