It looks less like a “model failure” and more like a containment failure.
When agents audit themselves, you’re effectively running recursive evaluation without structural bounds.
Did you enforce any step limits, retry budgets, or timeout propagation?
Without those, self-evaluation loops can amplify errors pretty quickly.
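For concreteness, the kind of bounds I mean look roughly like this (a minimal sketch; the names and numbers are made up, not from any particular framework):

```python
import time

class EvalBudget:
    """Bounds for a (possibly nested) self-evaluation loop."""

    def __init__(self, max_steps=50, max_retries=3, deadline_s=300.0):
        self.max_steps = max_steps
        self.max_retries = max_retries
        self.deadline = time.monotonic() + deadline_s
        self.steps = 0

    def check(self):
        """Call once per agent action; raises when a bound is exhausted."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise RuntimeError("step limit exceeded")
        if time.monotonic() > self.deadline:
            raise TimeoutError("evaluation deadline exceeded")

    def retry(self):
        """Consume one retry; raises once the retry budget is spent."""
        if self.max_retries <= 0:
            raise RuntimeError("retry budget exhausted")
        self.max_retries -= 1

    def child(self):
        """A nested evaluation inherits whatever time is left, so recursion
        can never outlive the outer call (timeout propagation)."""
        return EvalBudget(self.max_steps, self.max_retries,
                          self.deadline - time.monotonic())
```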
Re: the self-evaluation of the 'dreamer' genome (think: template), this is... not possible to answer briefly.
The dreamer's normal wake cycle has an 80-loop budget, with increasingly aggressive progress checks injected every 15 actions. When it sleeps after a wake cycle, it 'dreams' for a maximum of 10 iterations/actions (but only if more than 5 actions were taken during the cycle).
Every 10 wake cycles it does a deep sleep, which triggers a self-evaluation capped at 100 iterations; that's where it makes changes to its own source code, its files, and, really, anything.
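Roughly, the budget wiring looks like this (an illustrative sketch only, not the actual openseed code; the method names are made up):

```python
WAKE_LOOP_BUDGET = 80      # hard cap on actions per wake cycle
PROGRESS_CHECK_EVERY = 15  # inject a progress check, more aggressive each time
DREAM_CAP = 10             # max dream iterations after a wake cycle
DREAM_MIN_ACTIONS = 5      # only dream if the wake cycle did real work
DEEP_SLEEP_EVERY = 10      # every N wake cycles, deep sleep instead
DEEP_SLEEP_CAP = 100       # max iterations for the self-evaluation

def wake_cycle(creature):
    actions = 0
    for _ in range(WAKE_LOOP_BUDGET):
        if not creature.act():  # hypothetical: one LLM-driven action, False when done
            break
        actions += 1
        if actions % PROGRESS_CHECK_EVERY == 0:
            # severity ramps up: 1 after 15 actions, 2 after 30, ...
            creature.inject_progress_check(severity=actions // PROGRESS_CHECK_EVERY)
    if actions > DREAM_MIN_ACTIONS:
        for _ in range(DREAM_CAP):
            creature.dream_step()
    return actions

def run(creature):
    cycle = 0
    while True:
        cycle += 1
        wake_cycle(creature)
        if cycle % DEEP_SLEEP_EVERY == 0:
            creature.deep_sleep(max_iterations=DEEP_SLEEP_CAP)  # bounded self-modification
        creature.sleep()  # the creature picks its own sleep duration
```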
The creature can also alter its source and files at any point.
The creature lives in a local git repo so the orchestrator can roll back if it breaks itself.
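The rollback side is basically: snapshot a known-good state, run a health check after the creature has modified itself, and hard-reset if it fails. A sketch (the tag name and the health check are placeholders):

```python
import subprocess

def snapshot(repo_dir, tag="last-known-good"):
    """Record the current state as the rollback point."""
    subprocess.run(["git", "-C", repo_dir, "add", "-A"], check=True)
    subprocess.run(["git", "-C", repo_dir, "commit", "--allow-empty",
                    "-m", "orchestrator snapshot"], check=True)
    subprocess.run(["git", "-C", repo_dir, "tag", "-f", tag], check=True)

def rollback_if_broken(repo_dir, healthy, tag="last-known-good"):
    """If the health check fails, reset the creature to the snapshot."""
    if not healthy():  # hypothetical health check, e.g. run the test suite
        subprocess.run(["git", "-C", repo_dir, "reset", "--hard", tag], check=True)
        subprocess.run(["git", "-C", repo_dir, "clean", "-fd"], check=True)
```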
What you’ve described sounds a lot like layered containment:
- Loop budget (hard recursion bound)
- Progressive checks (soft convergence control)
- Sleep cycles (temporal isolation)
- Deep sleep cap (bounded self-modification)
- Git rollback (failure domain isolation)
Out of curiosity, have you measured amplification?
For example: total LLM calls per wake cycle, or per deep sleep?
I’m starting to think agent systems need amplification metrics the same way distributed systems track retry amplification.
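I'm imagining something as simple as attributing every LLM call to the cycle that triggered it (a hypothetical sketch):

```python
from collections import Counter

class AmplificationMeter:
    """Attribute every LLM call to the cycle that triggered it."""

    def __init__(self):
        self.calls = Counter()
        self.cycles = Counter()
        self._current = "untracked"

    def start_cycle(self, kind):  # kind: "wake", "dream", or "deep_sleep"
        self.cycles[kind] += 1
        self._current = kind

    def record_llm_call(self):
        self.calls[self._current] += 1

    def amplification(self, kind):
        """Average LLM calls per cycle of the given kind."""
        return self.calls[kind] / max(self.cycles[kind], 1)
```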
So far it seems pretty sane with Claude and incredibly boring with OpenAI (the OpenAI models just don't want to show any initiative).
One thing I neglected to mention is that it manages its own sleep duration and has a 'wakeup' CLI command. So far the agents (I prefer to call them creatures :) ) do a good job of finding the wakeup command, building scripts to poll for whatever they're waiting on (e.g. GitHub notifications), and sleeping for long periods.
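The scripts they write for themselves end up looking roughly like this (illustrative; the actual wakeup command name and the polling target vary per creature):

```python
import json
import os
import subprocess
import time
import urllib.request

def has_github_notifications(token):
    """True if there is at least one unread GitHub notification."""
    req = urllib.request.Request(
        "https://api.github.com/notifications",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return len(json.load(resp)) > 0

while True:
    if has_github_notifications(os.environ["GITHUB_TOKEN"]):
        subprocess.run(["creature", "wakeup"])  # hypothetical CLI name
        break
    time.sleep(600)  # nothing yet; stay asleep, check again in 10 minutes
```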
There's a daily cost cap, but I'm not yet making the creatures aware of that budget. I think I should do that soon, because that will be an interesting lever.
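Probably just a matter of injecting the remaining budget into the context at the start of every wake cycle, e.g. (numbers and wording made up):

```python
DAILY_CAP_USD = 10.00  # hypothetical cap

def budget_preamble(spent_today_usd):
    """A line prepended to the creature's context each wake cycle."""
    remaining = DAILY_CAP_USD - spent_today_usd
    return (f"Budget: ${remaining:.2f} of ${DAILY_CAP_USD:.2f} remains today. "
            "Plan your actions and your sleep duration accordingly.")
```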
rsdza•1h ago
It filed 5 findings with CVE-style writeups. One was a real container escape (the creature can rewrite the validate command the host executes). Four were wrong. I responded with detailed rebuttals.
The agent logged "CREDIBILITY CRISIS" as a permanent memory, cataloged each failure with its root cause, wrote a methodology checklist, and rewrote its own purpose to prioritize accuracy over volume. These changes persist across sleep cycles and load into every future session.
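Mechanically it's roughly: lessons get appended to a file in the repo and loaded into the prompt of every new session. A minimal sketch (the file layout and format here are illustrative, not openseed's actual code):

```python
import json
import pathlib

MEMORY_FILE = pathlib.Path("memory/permanent.jsonl")  # hypothetical location

def remember(note):
    """Append a lesson that should survive every future sleep cycle."""
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"note": note}) + "\n")

def load_into_session():
    """Render all permanent memories for the start of a new session."""
    if not MEMORY_FILE.exists():
        return ""
    notes = [json.loads(line)["note"] for line in MEMORY_FILE.read_text().splitlines()]
    return "Permanent memories:\n" + "\n".join(f"- {n}" for n in notes)

# e.g. remember("CREDIBILITY CRISIS: 4 of 5 findings were wrong; verify before filing.")
```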
The post covers the real vulnerability, the trust model for containerized agents, and what it looks like when an agent processes being wrong.
Open source: https://github.com/openseed-dev/openseed
The agent's audit: https://github.com/openseed-dev/openseed/issues/6