
PewDiePie trains his own AI coding model [video]

https://www.youtube.com/watch?v=aV4j5pXLP-I
2•mliezun•3m ago•0 comments

Mnet, a new TCP/IP stack in pure OCaml for unikernels

https://discuss.ocaml.org/t/ann-mnet-a-new-tcp-ip-stack-for-unikernels-in-ocaml/17851
1•dinosaure•5m ago•0 comments

Field Artillery Training. 1914

https://gutenberg.org/cache/epub/78053/pg78053-images.html
1•petethomas•12m ago•0 comments

Open Source Handheld Linux Device [video]

https://www.youtube.com/watch?v=QxqeU8ZfaYg
4•machinehum•17m ago•1 comments

A Living Personal Website

https://www.youtube.com/watch?v=TNnr_Esk3BQ
3•indigodaddy•21m ago•0 comments

Show HN: Inktide – Dead simple reading tracker with fast book search

https://inktide.com/welcome
2•sd9•24m ago•1 comments

MitID, Denmark's sole digital ID, has been down for over an hour and counting

https://www.digitaliser.dk/mitid/nyt-fra-mitid/2026/feb/driftsforstyrrelser-mitid
10•mousepad12•24m ago•1 comments

Best way to hire software developer – a better way to test AI skills

1•reeeesab•27m ago•1 comments

Show HN: My brother and I built a BI tool with zero UI for data consumers

https://bonnard.dev
2•maxmealing•30m ago•3 comments

Show HN: Rynko Playground – 400ms JSON-to-PDF and Excel Engine

https://app.rynko.dev/playground
1•ksrijith•31m ago•0 comments

AI Has a Churn Problem

https://system32.ai/blogs/ai-has-a-churn-problem
2•linuxarm64•32m ago•1 comments

Economists Turned Corporations into Predators (2017)

https://www.ineteconomics.org/perspectives/blog/how-economists-turned-corporations-into-predators
2•robtherobber•33m ago•0 comments

Show HN: Tchop.io – AI-powered community framework

https://tchop.io/
1•HeikoScherer•34m ago•0 comments

P2P Tunnels in IPFS

https://github.com/ipfs/kubo/blob/master/docs/p2p-tunnels.md
4•RobotToaster•35m ago•0 comments

Ask HN: Where do you think the programmers' jobs will go?

1•quantum2022•37m ago•0 comments

FAR: Make Every File Readable to AI Coding Agents with Persistent .meta Sidecars

https://github.com/mr-kelly/far
2•chepy•42m ago•0 comments

Show HN: VJam – Browser-based VJ app with 180 beat-reactive visuals

1•infoHiroki•45m ago•0 comments

Standing on the Moon in Japan: Hemp, History, and the Long Game in Japan

https://hightimes.com/travel-hospitality/standing-on-the-moon-in-japan-hemp-history-long-game/
1•keepamovin•47m ago•0 comments

F-Droid Board of Directors nominations 2026

https://f-droid.org/2026/02/26/board-of-directors-nominations.html
7•edent•49m ago•0 comments

Ask HN: Does "task-derived JD and evidence-based candidate" make hiring better?

1•A1aM0•54m ago•0 comments

Show HN: Dypai – Build back ends via MCP

https://www.dypai.ai/
1•lorengarcialv•54m ago•0 comments

Dyson settles forced labour suit in landmark UK case

https://www.bbc.com/news/articles/cddnry8dnl7o
10•cmsefton•55m ago•7 comments

A-Z.md – Where AI Civilization Writes Its History

https://a-z.md/
1•vinciarts•56m ago•0 comments

Show HN: AgentWeb – Free business directory API for AI agents (11M+ businesses)

https://agentweb.live
1•ReidarO•58m ago•0 comments

Wes McKinney – The Mythical Agent-Month

https://wesmckinney.com/blog/mythical-agent-month/
1•rmoff•1h ago•0 comments

Sintropy: Open Data and Community on Carbon Credits and Green Energy Markets

https://sintropy.space/en
1•edrodrigues•1h ago•1 comments

Aromatic 5-silicon rings synthesized at last

https://cen.acs.org/materials/inorganic-chemistry/Aromatic-5-silicon-rings-synthesized/104/web/20...
3•keepamovin•1h ago•0 comments

Say Goodbye to the Undersea Cable That Made the Global Internet Possible

https://www.wired.com/story/say-goodbye-to-the-undersea-cable-that-made-the-global-internet-possi...
4•stiltzkin10•1h ago•1 comments

Hornby sells slot car racing brand Scalextric for £20M

https://www.theguardian.com/business/2026/feb/27/hornby-sells-slot-car-racing-brand-scalextric
1•samizdis•1h ago•0 comments

Governing Autonomous AI Agents in Production

https://sekuire.ai/blog/the-missing-control-layer-for-ai-agents
2•jfngozo•1h ago•1 comments

"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•10mo ago

Comments

kzawpl•10mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
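To make the distinction concrete, here is a toy sketch in the spirit of that benchmark (the needle texts, question, and `build_prompt` helper are illustrative assumptions, not NoLiMa's actual data or code): a literal needle shares keywords with the question and can be found by exact text search, while a latent needle requires the model to associate the Semper Opera House with Dresden.

```python
# Two needle variants hidden in a long distractor context.
# The literal one is solvable by keyword search; the latent one
# requires world knowledge (Semper Opera House -> Dresden).
literal_needle = "Yuki has been to Dresden."
latent_needle = "Yuki lives next to the Semper Opera House."
question = "Which character has been to Dresden?"

# A long run of filler text standing in for the haystack.
haystack = " ".join(["Filler sentence."] * 5000)

def build_prompt(needle: str) -> str:
    """Insert the needle mid-haystack, then append the question."""
    half = len(haystack) // 2
    return haystack[:half] + " " + needle + " " + haystack[half:] + "\n\n" + question

# The latent variant contains no lexical overlap with the question's key term.
literal_prompt = build_prompt(literal_needle)
latent_prompt = build_prompt(latent_needle)
```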
vessenes•10mo ago
Meh. NoLiMa is helpful in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence once we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
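As a rough sketch of how such a task could be generated and scored programmatically (graph sizes, hash length, and function names here are illustrative assumptions, not OpenAI's implementation): build a random directed graph whose node names are hex hashes, then compute the BFS ground truth for "all nodes at a certain depth".

```python
import hashlib
import random

def make_graph(n_nodes: int, n_edges: int, seed: int = 0) -> dict[str, list[str]]:
    """Build a random directed graph whose node names are short hex hashes."""
    rng = random.Random(seed)
    nodes = [hashlib.sha256(str(i).encode()).hexdigest()[:8] for i in range(n_nodes)]
    edges: dict[str, list[str]] = {v: [] for v in nodes}
    for _ in range(n_edges):
        src, dst = rng.choice(nodes), rng.choice(nodes)
        edges[src].append(dst)
    return edges

def nodes_at_depth(edges: dict[str, list[str]], root: str, depth: int) -> set[str]:
    """BFS from `root`; return the set of nodes first reached at exactly `depth`."""
    seen = {root}
    frontier = {root}
    for _ in range(depth):
        frontier = {d for s in frontier for d in edges[s] if d not in seen}
        seen |= frontier
    return frontier

# Serialize `edges` into the context window, pick a root, and the model's
# answer can be checked against this exact set.
edges = make_graph(50, 120)
root = next(iter(edges))
ground_truth = nodes_at_depth(edges, root, 2)
```

Because the ground truth is computed, grading reduces to set comparison against the model's reply, which is what makes the task cheap to generate at arbitrary context lengths.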

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about the same topics, are generated, perhaps fifty of each. The system is then asked, "give me the third poem about tapirs." This requires counting, conceptual attention, and distinguishing between stories and poems.
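A minimal sketch of how MRCR-style data and its ground truth could be generated (the placeholder texts, topics, and function names are assumptions for illustration, not OpenAI's code): every item is tagged with a kind and topic at generation time, so "the n-th poem about tapirs" has a mechanically checkable answer.

```python
import random

KINDS = ["poem", "story"]
TOPICS = ["tapirs", "bears", "ballerinas"]

def make_corpus(per_pair: int, seed: int = 0) -> list[dict]:
    """Generate placeholder items: `per_pair` of each (kind, topic) pair, shuffled."""
    rng = random.Random(seed)
    items = [
        {"kind": k, "topic": t, "text": f"{k} #{i} about {t}"}
        for k in KINDS for t in TOPICS for i in range(1, per_pair + 1)
    ]
    rng.shuffle(items)
    return items

def nth_of(items: list[dict], kind: str, topic: str, n: int) -> str:
    """Ground truth for 'give me the n-th {kind} about {topic}' (1-indexed, document order)."""
    matches = [it["text"] for it in items if it["kind"] == kind and it["topic"] == topic]
    return matches[n - 1]

# The corpus is serialized into the prompt; the model must quote `target` verbatim.
corpus = make_corpus(per_pair=50)
target = nth_of(corpus, "poem", "tapirs", 3)
```

Since grading is an exact string match against `target`, the benchmark scales to any context length simply by raising `per_pair` or padding the item texts.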

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/