frontpage.

Study: Self-generated Agent Skills are useless

https://arxiv.org/abs/2602.12670
159•mustaphah•2h ago•70 comments

14-year-old Miles Wu folded an origami pattern that holds 10k times its own weight

https://www.smithsonianmag.com/innovation/this-14-year-old-is-using-origami-to-design-emergency-s...
333•bookofjoe•5h ago•67 comments

Show HN: Scanned 1927-1945 Daily USFS Work Diary

https://forestrydiary.com/
11•dogline•18m ago•1 comment

Show HN: Free Alternative to Wispr Flow, Superwhisper, and Monologue

https://github.com/zachlatta/freeflow
54•zachlatta•2h ago•29 comments

Running NanoClaw in a Docker Shell Sandbox

https://www.docker.com/blog/run-nanoclaw-in-docker-shell-sandboxes/
26•four_fifths•1h ago•4 comments

Show HN: Wildex – we built Pokémon Go for real wildlife

https://apps.apple.com/us/app/wildex-identify-plants-animals/id6748092158
55•AnujNayyar•2h ago•39 comments

Testing Postgres race conditions with synchronization barriers

https://www.lirbank.com/harnessing-postgres-race-conditions
44•lirbank•3h ago•19 comments

Suicide Linux (2009)

https://qntm.org/suicide
66•icwtyjj•3h ago•41 comments

Visual Introduction to PyTorch

https://0byte.io/articles/pytorch_introduction.html
88•0bytematt•3d ago•11 comments

What your Bluetooth devices reveal

https://blog.dmcc.io/journal/2026-bluetooth-privacy-bluehood/
270•ssgodderidge•9h ago•105 comments

PascalABC.net

https://pascalabc.net:443/en
16•andsoitis•2d ago•2 comments

Turing Labs (YC W20) Is Hiring – Founding GTM Sales Hacker

1•turinglabs•2h ago

PCB Rework and Repair Guide [pdf]

https://www.intertronics.co.uk/wp-content/uploads/2017/05/PCB-Rework-and-Repair-Guide.pdf
75•varjag•2d ago•22 comments

Camera that captures photos to cassette tape

https://hackaday.io/project/205004-digital-analog-tape-picture-camera
30•Jun8•5d ago•2 comments

State of Show HN: 2025

https://blog.sturdystatistics.com/posts/show_hn/
50•kianN•4h ago•7 comments

Show HN: Jemini – Gemini for the Epstein Files

https://jmail.world/jemini
223•dvrp•18h ago•43 comments

LCM: Lossless Context Management [pdf]

http://papers.voltropy.com/LCM
17•ClintEhrlich•5h ago•11 comments

Show HN: Maths, CS and AI Compendium

https://github.com/HenryNdubuaku/maths-cs-ai-compendium
47•HenryNdubuaku•8h ago•13 comments

Show HN: 2D Coulomb Gas Simulator

https://simonhalvdansson.github.io/2D-Coulomb-Gas-Tools/index_gpu.html
25•swesnow•4h ago•5 comments

Neurons outside the brain

https://essays.debugyourpain.com/p/you-are-not-just-your-brain
41•yichab0d•5h ago•18 comments

Qwen3.5: Towards Native Multimodal Agents

https://qwen.ai/blog?id=qwen3.5
363•danielhanchen•14h ago•173 comments

Rise of the Triforce

https://dolphin-emu.org/blog/2026/02/16/rise-of-the-triforce/
13•max-m•2h ago•1 comment

10 years building vertical software: are we cooked?

https://twitter.com/nicbstme/status/2023501562480644501
25•nbstme•2h ago•24 comments

The long tail of LLM-assisted decompilation

https://blog.chrislewis.au/the-long-tail-of-llm-assisted-decompilation/
34•knackers•5h ago•9 comments

Ghidra by NSA

https://github.com/NationalSecurityAgency/ghidra
297•handfuloflight•2d ago•166 comments

Chiplets Get Physical: The Days of Mix-and-Match Silicon Draw Nigh

https://www.eejournal.com/article/chiplets-get-physical-the-days-of-mix-and-match-silicon-draw-nigh/
17•transpute•2d ago•11 comments

Privilege is bad grammar

https://tadaima.bearblog.dev/privilege-is-bad-grammar/
170•surprisetalk•5h ago•168 comments

Building a model that visualizes strategic golf

https://golfcoursewiki.substack.com/p/i-spent-the-last-month-and-a-half
7•scoofy•6h ago•2 comments

How to take a photo with scotch tape (lensless imaging) [video]

https://www.youtube.com/watch?v=97f0nfU5Px0
87•surprisetalk•7h ago•4 comments

WebMCP Proposal

https://webmachinelearning.github.io/webmcp/
126•Alifatisk•6h ago•66 comments

LCM: Lossless Context Management [pdf]

http://papers.voltropy.com/LCM
17•ClintEhrlich•5h ago

Comments

ClintEhrlich•1h ago
Hi, I'm Clint, one of the co-authors of this paper.

I'd like to quickly summarize what is different about our approach and why it matters.

Our work was inspired by brilliant research done at MIT CSAIL on "Recursive Language Models" (RLMs). One of the controversies has been whether these models are just a formalization of what agents like Claude Code already do vs. whether they bring new capabilities to the table.

By outperforming Claude on the major long-context benchmark, we provide a strong signal that something fundamentally new is happening. (In other words, it's not "just Claude Code" because it demonstrably outperforms Claude Code in the long-context regime.)

Where our contribution, LCM, differs from RLMs is how we handle recursion. RLMs use "symbolic recursion" -- i.e., they have an LLM write a script to recursively call itself in order to manipulate the context, which is stored in a REPL. This provides maximum flexibility... but it often goes wrong, since the LLM may write imperfect scripts.

LCM attempts to decompose the recursion from RLMs into deterministic primitives so that the control flow can be managed by an engine rather than left to the whims of the LLM. In practice, this means we replace bespoke scripts with two mechanisms: (1) A DAG-based context management system that works like paged virtual memory, except for managing conversations and files; and (2) Operator-level recursion, like "Map" for LLMs, which lets one tool call process thousands of tasks.
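
To make the second mechanism concrete, here is a rough sketch of the shape of operator-level recursion (simplified pseudocode, not our exact API; the engine object and call signatures are stand-ins):

    # Simplified sketch of operator-level recursion, e.g. an llm_map.
    # The engine, not the model, owns the fan-out/fan-in control flow.

    def llm_map(prompt: str, items: list, engine) -> list:
        # Apply one prompt to every item as an isolated LLM sub-call.
        handles = []
        for item in items:
            out = engine.call_llm(prompt=prompt, input=item)  # fresh context per item
            handles.append(engine.store(out))  # result becomes a node in the DAG
        return handles  # handles, not raw text, flow back to the caller

    # One tool call from the main agent can process thousands of tasks:
    # handles = llm_map("Summarize this document.", corpus, engine)

The point is that the recursion pattern is fixed and deterministic; the model only chooses what to map over and with which prompt.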

An analogy we draw in the paper is the evolution from GOTO statements (of Dijkstra's "Considered Harmful" fame) to structured programming. RLMs are maximally expressive, but all of that power comes with the risk of things going awry. We have built a more mechanistic system, which can provide stronger guarantees when deployed in production with today's models.

Happy to answer any questions! Thanks for taking a look at the paper!

vessenes•1h ago
This looks super useful! And it’s intellectually appealing to think that the LLM will have the ability to think back precisely and we can rely on DAG tooling to reason about and keep track of history (and correct history).

Have you considered making an openclaw plugin/PR for it? I understand you have your own coding CLI tool, but this doesn't look so hard to implement that it couldn't be done elsewhere.

Either way, thanks for sharing this.

ClintEhrlich•1h ago
Yes, that is actually the next thing we are shipping!

We have heard from a ton of OpenClaw users that the biggest barrier to them getting everything they want out of their agents is that memory is not a solved problem.

LCM could be a great solution to that. Stay tuned -- will ship it ASAP.

vessenes•57m ago
Love it. Yes, compaction is a huge pain point in openclaw, and it EATS tokens.

vessenes•33m ago
Riffing on this a little, there’s a few things that would be useful:

1 - global namespace - for the gateway agent/coordinator - would make inspecting the results of subagent tasks much safer and more efficient, and would bring all the benefits of precision across compaction boundaries to the main chat thread. I could see giving the subagents access to it, or just prompting them fresh and storing results in the global memory - probably the second is better.

2 - permissioned memory spaces - stuff that a given subagent should know without giving them global memory access. Then a gateway could mark some stuff ‘available’ as part of prompting.

This would be a super useful set of primitives - from reading the paper, I think you could do this relatively cheaply, maybe with a tagging system for branches/nodes in the DAG. openclaw already keeps some track of what subagents should have access to in the form of skills, but I haven't looked into the actual permissions architecture.
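
Something like this is the shape I have in mind (all names invented, just to sketch the tagging primitive; nothing here is from the paper or openclaw):

    # Sketch of permissioned memory spaces as tags on DAG nodes.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        content: str
        tags: set[str] = field(default_factory=set)  # e.g. {"global", "agent:research"}

    class MemoryDAG:
        def __init__(self):
            self.nodes: dict[str, Node] = {}

        def grant(self, node_id: str, tag: str) -> None:
            # Gateway marks a node 'available' to one subagent's memory space.
            self.nodes[node_id].tags.add(tag)

        def view(self, tag: str) -> dict[str, Node]:
            # Everything a subagent holding this tag may read:
            # its own permissioned nodes plus the global namespace.
            return {k: n for k, n in self.nodes.items()
                    if tag in n.tags or "global" in n.tags}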

ClintEhrlich•29m ago
Just passed this on to my co-author who is working on the plug-in. Really appreciate the suggestions!

We will probably ship a fairly basic version to start, but I think there are a lot of cool things that can be added.

jorl17•1h ago
Thank you so much for your work!

I've echoed the sentiment here on HN (and elsewhere) that these kinds of mechanisms seem to be a pathway to extending context longer and longer, and I wish I could toy around with this technology right now (can I?). I'm so excited!!

Your work is the shoulders-built-on-shoulders upon which other giants shall keep on building. Thank you so much.

ClintEhrlich•58m ago
Thanks for the kind words.

Yes, we think there is a ton of low-hanging fruit from taking lessons from OS/PL theory and applying them to LLM tooling.

This is our first contribution in that direction. There will be more!

ClintEhrlich•50m ago
Oh and to be clear YES you can try it!!!

Just bring an API key. :)

github.com/voltropy/volt

quotemstr•19m ago
Cool. I agree (consistent with your GOTO analogy) that imposing structure on the model (or a human) can constrain the search space and lead to better choices given a fixed decision budget.

> deterministic primitives

Are agent-map and LLM-map the only two options you've given the model for recursive invocations? No higher-level, er, reduction operators to augment the map primitives?

belisarius222•6m ago
Hi, I'm the other author on this paper. You've asked a good question. I had originally planned on writing an agentic_reduce operator to complement the agentic_map operator, but the more I thought about it, the more I realized I couldn't come up with a use case for it that wasn't contrived. Instead, having the main agent write scripts that perform aggregations on the result of an agentic_map or llm_map call made a lot more sense.

It's quite possible that's wrong. If so, I would write llm_reduce like this: it would spawn a sub-task for every pair of elements in the list, which would call an LLM with a prompt telling it how to combine the two elements into one. The output type of the reduce operation would need to be the same as the input type, just like in normal map/reduce. This allows for a tree of operations to be performed, where the reduction is run log(n) times, resulting in a single value.

That value should probably be loaded into the LCM database by default, rather than putting it directly into the model's context, to protect the invariant that the model should be able to string together arbitrarily long sequences of maps and reduces without filling up its own context.
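
For what it's worth, a simplified sketch of that hypothetical llm_reduce (not real volt code; combine_prompt and the engine calls are stand-ins):

    # Pairwise tree reduction: each round halves the list, so the
    # reduction runs ~log2(n) rounds of parallelizable LLM sub-tasks.

    def llm_reduce(combine_prompt: str, items: list, engine):
        level = list(items)
        while len(level) > 1:
            next_level = []
            for i in range(0, len(level) - 1, 2):
                # One sub-task per pair; output type must equal input type.
                merged = engine.call_llm(prompt=combine_prompt,
                                         input=(level[i], level[i + 1]))
                next_level.append(merged)
            if len(level) % 2 == 1:
                next_level.append(level[-1])  # odd element carries over
            level = next_level
        # Load the final value into the LCM database rather than the
        # model's context, so arbitrary map/reduce chains stay bounded.
        return engine.store(level[0])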

I don't think this would be hard to write. It would reuse the same database and parallelism machinery that llm_map and agentic_map use.