frontpage.

A three-layer memory architecture for long-running agents

1•mvyshnyvetska•31m ago
Anthropic's recent piece on effective harnesses for long-running agents hit close to home. We've been wrestling with the same problems — agents that try to one-shot everything, declare victory prematurely, and leave chaos for the next session to clean up. But we solved some of these problems differently. Here's what's working for us, what isn't yet, and where we respectfully disagree with the proposed solutions.

The Memory Problem: Three Layers Beat One

Anthropic's solution is a progress.txt file plus git history. It works, but it's flat. We use three layers instead:

Layer 1: Model actualization. A semantic memory system that helps the orchestrating agent understand "what are we building and why." This is the soft layer.

Layer 2: Think Jira meets Git, but for AI agents. Structured storage of tasks with metadata: blockers, decision paths, dependencies, progress state. The agent doesn't just know what to do next; it understands the logic of how we got here and where we're going.

Layer 3: Git. The non-rotting charm of classic version control.

The key insight: separating "understanding" from "tracking" from "versioning" reduces cognitive load on the agent.
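As a rough sketch of how the three layers might separate in code (all class and field names here are illustrative, not from the post; git itself would be the real third layer, stubbed as a commit log):

```python
from dataclasses import dataclass, field
from enum import Enum

class TaskState(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    BLOCKED = "blocked"
    DONE = "done"

@dataclass
class Task:
    """Layer 2: a structured task record ("Jira meets Git")."""
    title: str
    state: TaskState = TaskState.PENDING
    blockers: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)  # titles of prerequisite tasks
    decisions: list = field(default_factory=list)     # the logic of how we got here

@dataclass
class ProjectMemory:
    """Layer 1: the soft layer -- what are we building and why."""
    goal: str
    notes: list = field(default_factory=list)

class MemoryStack:
    """Keeps understanding, tracking, and versioning as separate concerns."""
    def __init__(self, goal: str):
        self.understanding = ProjectMemory(goal)  # Layer 1
        self.tasks = []                           # Layer 2
        self.commits = []                         # Layer 3 stand-in (really git)

    def next_task(self):
        """Pick the first pending task whose dependencies are all done."""
        done = {t.title for t in self.tasks if t.state is TaskState.DONE}
        for t in self.tasks:
            if t.state is TaskState.PENDING and all(d in done for d in t.dependencies):
                return t
        return None
```

The point of the split is that the orchestrator can answer "why" from Layer 1 and "what next" from Layer 2 without replaying git history.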

On Premature Victory: Prompt Engineering > Programmatic Constraints

Anthropic's approach to the "agent declares victory too early" problem is a JSON file with passes: true/false flags and strongly-worded instructions not to edit it.

Our approach: make the supervising agent responsible for proper task breakdown into what we call atomic structures: concrete enough to be unambiguous, but not so detailed that they micromanage the implementation. Completion criteria live in the sub-agent's prompt, not in the task definition. The sub-agent knows: tests must pass, lint must be clean, migrations applied. The supervising agent doesn't repeat this for every task; it's baked into how the sub-agent operates.

Yes, this requires better prompts. But it also produces more robust behavior. The agent develops something closer to judgment rather than just following rules it's been told not to break.
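A minimal illustration of the idea that completion criteria ride along in the sub-agent's prompt rather than per-task flags (the exact wording and function name are hypothetical):

```python
# Baked into how the sub-agent operates: the supervising agent
# never restates these per task.
COMPLETION_CRITERIA = """Before reporting a task as complete, verify all of:
- the test suite passes
- lint is clean
- database migrations are applied
If any check fails, keep working; do not declare victory early."""

def subagent_prompt(task_description: str) -> str:
    """Build the sub-agent prompt: shared criteria first, then the
    atomic task, which stays free of completion-criteria boilerplate."""
    return f"{COMPLETION_CRITERIA}\n\nYour task:\n{task_description}"
```

Compared to a passes: true/false JSON file, the criteria here constrain behavior up front instead of relying on the model not to edit a flag.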

The Multi-Agent Question: Minimum Viable Agents

Anthropic asks whether a single general-purpose agent or a multi-agent architecture works better. Our answer: use as few agents as possible. Every handoff between agents is a potential break in reasoning continuity.

For small projects or clean microservice architectures: two agents, a strategic orchestrator and a coding agent. For complex systems: add a code reviewer. Three agents maximum.
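The team-sizing rule is simple enough to state as code (a toy sketch; the role strings mirror the roles named above):

```python
def pick_team(complex_system: bool) -> list:
    """Minimum viable agents: every handoff between agents is a
    potential break in reasoning continuity, so add roles reluctantly."""
    team = ["strategic orchestrator", "coding agent"]
    if complex_system:
        team.append("code reviewer")  # three agents maximum
    return team
```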

What Anthropic Missed: Human-in-the-Loop as Synchronization

The Anthropic piece treats human involvement as bookends: you provide the prompt, you review the result. We built it differently: the user can intervene at any point, and the sub-agent's completion report doesn't reach the orchestrator until the user validates it.

This started as a bug. It became our favorite feature. Why it matters: we do not limit autonomy; it's about synchronizing understanding between human and AI at critical checkpoints.
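One way to picture the checkpoint: completion reports sit in a gate until the user validates them, and only validated reports flow on to the orchestrator (class and method names are hypothetical, not from the post):

```python
from dataclasses import dataclass

@dataclass
class CompletionReport:
    task: str
    summary: str
    validated: bool = False

class HumanGate:
    """Holds sub-agent completion reports until a human validates them;
    the orchestrator only ever sees validated reports."""
    def __init__(self):
        self._pending = []

    def submit(self, report: CompletionReport) -> None:
        # Sub-agent finished; the orchestrator does NOT hear about it yet.
        self._pending.append(report)

    def validate(self, task: str) -> None:
        # The human checkpoint: mark a task's report as confirmed.
        for r in self._pending:
            if r.task == task:
                r.validated = True

    def for_orchestrator(self) -> list:
        # Only validated reports are released downstream.
        return [r for r in self._pending if r.validated]
```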

What We Haven't Solved Yet

Honesty time: end-to-end testing is still manual for us. Lint passes, unit tests run, but visual verification happens in a separate session with a human watching. Anthropic's Puppeteer integration for browser testing is genuinely useful; we haven't automated that layer yet. It's on the roadmap, right after "pay rent" and "sleep occasionally."

The Takeaway

Long-running agents are hard. Anthropic's solutions work. Ours work differently. The philosophical difference: they lean toward programmatic constraints (JSON files, explicit flags, structured formats the model "can't" edit). We lean toward better task decomposition and human checkpoints. Neither approach is wrong; different contexts, different constraints. We're sharing what worked in ours. If you're building something similar, maybe it saves you a few iterations.

Screencap.me – Screen Recording in the Browser

https://screencap.me/
1•rickcarlino•1m ago•0 comments

Gated Attention for Large Language Models

https://arxiv.org/abs/2505.06708
1•xnhbx•1m ago•0 comments

Canadian mathematician becomes two-time World Champion in Scrabble

https://ottawacitizen.com/news/ottawa-scrabble-champion
1•heresie-dabord•5m ago•0 comments

Funding: The rpki-client project needs your help

https://www.rpki-client.org/funding.html
1•Panino•6m ago•0 comments

I heat my Essex home with a data centre in the shed

https://www.bbc.co.uk/news/articles/c0rpy7envr5o
1•BerislavLopac•7m ago•0 comments

Google CEO says ‘vibe coding’ made software development ‘so much more enjoyable’

https://www.google.com/url?q=https://indianexpress.com/article/technology/tech-news-technology/go...
2•ashishgupta2209•8m ago•2 comments

What Was the First Bookmark?

https://bookmarkstuff.com/blog/2025-11-30-the-first-bookmark
1•devrundown•10m ago•2 comments

People Who Care: A twelve-year-old on personality in tech product design

https://micahblachman.beehiiv.com/p/people-who-care
1•subdomain•11m ago•0 comments

Calculating compressed air requirements: Step-by-step instructions

https://scc-aircompressors.com/en/calculate-compressed-air-requirement/
2•rustoo•14m ago•1 comments

From Zero to GitHub: Starting a New Jj (Jujutsu) Repo

https://www.visualmode.dev/from-zero-to-github-starting-a-new-jj-jujutsu-repo
3•todsacerdoti•16m ago•0 comments

Show HN: Run LLMs locally with WebGPU

https://qwen-web.sdan.io/
1•sdan•17m ago•0 comments

Using Petri nets as a formal language for LLM-assisted development

https://github.com/pflow-xyz/go-pflow
1•orksliver•21m ago•2 comments

GitHub to Codeberg: My Experience

https://eldred.fr/blog/forge-migration/
2•todsacerdoti•24m ago•0 comments

HashJack Indirect Prompt Injection Weaponizes Websites

https://www.infosecurity-magazine.com/news/hashjack-indirect-prompt-injection/
2•nobody9999•27m ago•1 comments

How do we keep apps maintained on Flathub?

https://tim.siosm.fr/blog/2025/11/24/building-better-app-store-flathub/
3•coffeeaddict1•28m ago•0 comments

Interslavic

https://en.wikipedia.org/wiki/Interslavic
2•ingve•29m ago•0 comments

Surface Tension

https://iamstelios.com/blog/surface-tension/
1•i8s•29m ago•0 comments

The Thinking Game Film – Google DeepMind Documentary

https://thinkinggamefilm.com
23•ChrisArchitect•29m ago•12 comments

Audinspect – An audio file inspector for A&R teams and producers

https://github.com/404oops/Audinspect
1•404oops•31m ago•0 comments

Coding is the purest form of art

1•learningstud•36m ago•1 comments

Show HN: Cognitive AI architecture prototype with identity, memory, initiative

https://ivanhonis.github.io/ai_home/
1•nDot_io•37m ago•0 comments

Aissist – my personal AI assistant CLI that remembers

https://github.com/albertnahas/aissist
1•albertnahas•38m ago•1 comments

Langjam Gamejam: Build a programming language then make a game with it

https://langjamgamejam.com/
2•birdculture•39m ago•0 comments

Why India struggles to clear its air

https://www.thehindu.com/sci-tech/energy-and-environment/why-india-struggles-to-clear-its-air/art...
1•saikatsg•41m ago•0 comments

Free Database with 4B+ reverse DNS records

https://ip.thc.org/docs/bulk-data-access
1•lakti_mosfit•41m ago•0 comments

Why I Built My Own Kubernetes Cluster on Hetzner Cloud

https://onatm.dev/2025/11/30/why-i-built-my-own-kubernetes-cluster/
5•onatm•42m ago•3 comments

Modern cars are spying on you. Here's what you can do about it

https://apnews.com/article/auto-car-privacy-3674ce59c9b30f2861d29178a31e6ab7
40•MilnerRoute•44m ago•17 comments

How I unlocked the Kimi K2 $0.99 offer?

https://www.kimi.com/share/19ad56ec-77b2-8e5e-8000-0000e19451c6
1•raghavankl•45m ago•0 comments

Atlas Shrugged

https://david-jasso.com/2024/04/11/atlas-shrugged/
15•mnky9800n•46m ago•8 comments