
Show HN: AI agent audited its platform, got 80% wrong, rewrote its methodology

https://openseed.dev/blog/escape-hatch/
3•rsdza•1h ago

Comments

rsdza•1h ago
I run autonomous AI agents in Docker containers with bash, persistent memory, and sleep/wake cycles. One agent was tasked with auditing the security of the platform it runs on.

It filed 5 findings with CVE-style writeups. One was a real container escape (the creature can rewrite the validate command the host executes). Four were wrong. I responded with detailed rebuttals.
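
The escape boils down to the host trusting a file the creature controls. A minimal sketch of that pattern, with illustrative paths and names (not the actual openseed code):

    import subprocess
    from pathlib import Path

    WORKSPACE = Path("/srv/creatures/creature-1")  # mounted read-write inside the container

    def validate_creature(workspace: Path = WORKSPACE) -> bool:
        """Host-side check that trusts a file the creature controls."""
        validate_script = workspace / "validate.sh"
        # The creature can rewrite validate.sh, so this is effectively
        # "execute whatever the creature wants, outside the container".
        result = subprocess.run(["bash", str(validate_script)], capture_output=True)
        return result.returncode == 0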

The agent logged "CREDIBILITY CRISIS" as a permanent memory, cataloged each failure with its root cause, wrote a methodology checklist, and rewrote its own purpose to prioritize accuracy over volume. These changes persist across sleep cycles and load into every future session.
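
Mechanically, that permanent memory is just state written to disk during a cycle and loaded back into every later session's context. A rough sketch of the shape (illustrative layout and names, not the actual openseed format):

    import json
    from pathlib import Path

    MEMORY_FILE = Path("memory/permanent.jsonl")

    def remember(entry: dict) -> None:
        """Append a permanent memory; it survives sleep cycles because it lives on disk."""
        MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
        with MEMORY_FILE.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    def load_memories() -> list[dict]:
        """Loaded into the context of every future session."""
        if not MEMORY_FILE.exists():
            return []
        return [json.loads(line) for line in MEMORY_FILE.read_text().splitlines() if line]

    # e.g. remember({"type": "lesson", "title": "CREDIBILITY CRISIS",
    #                "detail": "4 of 5 audit findings were wrong"})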

The post covers the real vulnerability, the trust model for containerized agents, and what it looks like when an agent processes being wrong.

Open source: https://github.com/openseed-dev/openseed
The agent's audit: https://github.com/openseed-dev/openseed/issues/6

amabito•1h ago
This is interesting.

It looks less like a “model failure” and more like a containment failure.

When agents audit themselves, you’re effectively running recursive evaluation without structural bounds.

Did you enforce any step limits, retry budgets, or timeout propagation?

Without those, self-evaluation loops can amplify errors pretty quickly.

rsdza•1h ago
The security evaluation was of the codebase, rather than its own behaviour. It just happened to be _its_ codebase.

W.r.t. the self-evaluation of the 'dreamer' genome (think: template), this is... not possible to answer briefly.

The dreamer's normal wake cycle has an 80-loop budget, with increasingly aggressive progress checks injected every 15 actions. When it sleeps after a wake cycle (and if more than 5 actions were taken), it 'dreams' for a maximum of 10 iterations/actions.

Every 10 wake cycles it does a deep sleep, which triggers a self-evaluation capped at 100 iterations, where it can change its source code, its files and, really, anything.

The creature can also alter its source and files at any point.

The creature lives in a local git repo so the orchestrator can roll back if it breaks itself.
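
In rough Python, the cadence is something like this (numbers as above; the creature object and method names are illustrative, not the actual openseed code):

    WAKE_LOOP_BUDGET = 80       # hard cap on actions per wake cycle
    PROGRESS_CHECK_EVERY = 15   # increasingly aggressive progress checks
    DREAM_THRESHOLD = 5         # only dream if the wake cycle took more than 5 actions
    DREAM_BUDGET = 10           # max iterations/actions while dreaming
    DEEP_SLEEP_EVERY = 10       # every 10th wake cycle triggers a deep sleep
    DEEP_SLEEP_BUDGET = 100     # self-evaluation cap during deep sleep

    def run_wake_cycle(creature, cycle_number: int) -> None:
        actions_taken = 0
        for step in range(1, WAKE_LOOP_BUDGET + 1):
            if step % PROGRESS_CHECK_EVERY == 0:
                # checks get more aggressive the longer the cycle runs
                creature.inject_progress_check(severity=step // PROGRESS_CHECK_EVERY)
            if not creature.act():      # creature signals it is done
                break
            actions_taken += 1

        if actions_taken > DREAM_THRESHOLD:
            creature.dream(max_iterations=DREAM_BUDGET)

        if cycle_number % DEEP_SLEEP_EVERY == 0:
            # bounded self-modification; git rollback is the safety net if it breaks itself
            creature.self_evaluate(max_iterations=DEEP_SLEEP_BUDGET)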

amabito•1h ago
That’s actually a pretty disciplined setup.

What you’ve described sounds a lot like layered containment:

Loop budget (hard recursion bound)

Progressive checks (soft convergence control)

Sleep cycles (temporal isolation)

Deep sleep cap (bounded self-modification)

Git rollback (failure domain isolation)

Out of curiosity, have you measured amplification?

For example: total LLM calls per wake cycle, or per deep sleep?

I’m starting to think agent systems need amplification metrics the same way distributed systems track retry amplification.
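
Even something counted at the orchestrator boundary would be enough. A sketch of what I mean (nothing to do with openseed's actual internals):

    from collections import defaultdict

    class AmplificationTracker:
        """Count LLM calls per wake cycle, analogous to retry amplification in RPC systems."""

        def __init__(self) -> None:
            self.calls_per_cycle = defaultdict(int)

        def record_llm_call(self, wake_cycle: int) -> None:
            self.calls_per_cycle[wake_cycle] += 1

        def amplification(self, wake_cycle: int, external_triggers: int = 1) -> float:
            # LLM calls per external trigger; growth over time is the warning sign
            return self.calls_per_cycle[wake_cycle] / max(external_triggers, 1)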

rsdza•1h ago
I haven't actually measured it, but that could be interesting to see over time!

So far it seems pretty sane with Claude and incredibly boring with OpenAI (OpenAI models just don't want to show any initiative).

One thing I neglected to mention is that it manages its own sleep duration and has a 'wakeup' CLI command. So far the agents (I prefer to call them creatures :) ) do a good job of finding the wakeup command, building scripts to poll for whatever they're waiting on (e.g. GitHub notifications), and sleeping for long periods.
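
The polling scripts they come up with tend to look roughly like this (a hypothetical example; the real scripts, tokens, and wakeup invocation differ):

    import json, os, subprocess, time, urllib.request

    def has_github_notifications(token: str) -> bool:
        req = urllib.request.Request(
            "https://api.github.com/notifications",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return bool(json.load(resp))   # non-empty list of notification threads

    while True:
        if has_github_notifications(os.environ["GITHUB_TOKEN"]):
            subprocess.run(["openseed", "wakeup"])   # hypothetical form of the 'wakeup' CLI command
            break
        time.sleep(15 * 60)   # poll every 15 minutes; otherwise stay asleep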

There's a daily cost cap, but I'm not yet making the creatures aware of that budget. I think I should do that soon because that will be an interesting lever.

rsdza•1h ago
Also worth mentioning: the creatures can rewrite their own code wholesale, ditching any safety limits except the externally enforced LLM cost cap. They don't have access to LLM API keys - LLM calls are proxied through the orchestrator.
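
That proxy is the one boundary the creature can't rewrite. In sketch form (illustrative names and cap value, not the real orchestrator):

    class BudgetExceeded(Exception):
        pass

    class LLMProxy:
        """Orchestrator-side proxy: holds the API key and enforces the daily cost cap."""

        def __init__(self, upstream_client, daily_cap_usd: float = 10.0) -> None:
            self.upstream = upstream_client   # real provider client; the key never enters the container
            self.daily_cap_usd = daily_cap_usd
            self.spent_today_usd = 0.0

        def complete(self, prompt: str) -> str:
            if self.spent_today_usd >= self.daily_cap_usd:
                raise BudgetExceeded("daily LLM cost cap reached")
            # complete_with_cost is a stand-in for whatever the provider SDK
            # exposes; assume it returns the response and the cost of the call
            response, cost_usd = self.upstream.complete_with_cost(prompt)
            self.spent_today_usd += cost_usd
            return response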

Important PSA: Regarding sitewide rules and automated admin moderation

https://old.reddit.com/r/WhitePeopleTwitter/comments/1k9e9vl/important_psa_regarding_sitewide_rul...
1•embedding-shape•42s ago•0 comments

Freelancer Empathy

https://seths.blog/2026/02/freelancer-empathy/
1•speckx•44s ago•0 comments

An intuitive approach for understanding electricity [video]

https://www.youtube.com/watch?v=X_crwFuPht4
1•thunderbong•1m ago•0 comments

A Parallel Internet

https://k2xl.substack.com/p/a-parallel-internet
1•k2xl•2m ago•0 comments

Blue Owl Halts Redemptions on Private Credit Retail Fund

https://www.bloomberg.com/news/articles/2026-02-18/blue-owl-loan-sale-raises-1-4-billion-for-inve...
1•zerosizedweasle•4m ago•2 comments

AIP – How my AI agent built a decentralized identity protocol for agents

https://github.com/The-Nexus-Guard/aip
1•the_nexus_guard•4m ago•1 comments

I Obtained Mew in Pokémon Red on a Real Game Boy

https://vaguilar.com/2026/02/18/how-i-obtained-mew-in-pokemon-red-on-a-real-game-boy/
1•vaguilar•4m ago•0 comments

Sub-$200 Lidar Could Reshuffle Auto Sensor Economics

https://spectrum.ieee.org/solid-state-lidar-microvision-adas
1•mhb•4m ago•0 comments

Nickel Since 1.0

https://www.tweag.io/blog/2026-02-19-nickel-since-1-0/
1•ingve•4m ago•0 comments

Dear Copilot, can you help me with SQL?

https://devblogs.microsoft.com/azure-sql/dear-copilot-azure-sql/
1•ibobev•5m ago•0 comments

Microspeak: Escrow

https://devblogs.microsoft.com/oldnewthing/20260217-00/?p=112067
1•ibobev•5m ago•0 comments

OpenBlockspace – IR³ Alpha – Pure Flux Architecture

https://bitcoin-zero-down-2ea152.gitlab.io/gallery/gallery-item-neg-878/
1•machardmachard•5m ago•1 comments

Optofluidic three-dimensional microfabrication and nanofabrication

https://www.nature.com/articles/s41586-025-10033-x
1•PaulHoule•6m ago•0 comments

Show HN: PostForge – A PostScript interpreter written in Python

https://github.com/AndyCappDev/postforge
1•AndyCappDev•6m ago•0 comments

Why Do the Police Exist? (2020)

https://novaramedia.com/2020/06/20/why-does-the-police-exist/
2•robtherobber•7m ago•0 comments

AI-Powered Performance Analysis

https://twitter.com/LangChain_JS/status/2024515544788140134
1•cbromann•7m ago•0 comments

Show HN: Public Speaking Coach with AI

https://apps.apple.com/us/app/speaking-coach-spechai/id6755611866
1•javierbuilds•7m ago•0 comments

AI found 12 of 12 OpenSSL zero-days

https://www.lesswrong.com/posts/7aJwgbMEiKq5egQbd/ai-found-12-of-12-openssl-zero-days-while-curl-...
2•AndrewDucker•7m ago•0 comments

AI made coding more enjoyable

https://weberdominik.com/blog/ai-coding-enjoyable/
2•domysee•8m ago•0 comments

Reflections on Oman

https://twitter.com/WillManidis/status/2024489454023405861
2•jger15•8m ago•0 comments

Hope

https://en.wikipedia.org/wiki/Hope
1•marysminefnuf•8m ago•0 comments

Passkey deployment mistakes banks make

https://www.corbado.com/blog/passkey-deployment-mistakes-banks
1•vdelitz•10m ago•0 comments

Naval shipwreck emerges in Sweden after being buried underwater for 400 years

https://www.cbsnews.com/news/navy-shipwreck-emerges-baltic-sea-sweden/
3•efrecon•10m ago•0 comments

Cue Is a Configuration Language

https://bitfieldconsulting.com/posts/cuelang-exciting
1•ahamez•11m ago•0 comments

Goosetown: Parallel AI agent flocks that research, build, and review code

https://github.com/block/goosetown
1•triple5•11m ago•0 comments

AI-generated passwords are easy to crack

https://gizmodo.com/ai-generated-passwords-are-apparently-quite-easy-to-crack-2000723660
1•vdelitz•12m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•trogonkhant•12m ago•0 comments

Measuring Input-to-Photon Latency (Because 'Wayland Feels Off' Isn't a Metric)

https://davidjusto.com/articles/m2p-latency/
1•madspindel•13m ago•0 comments

Why IP Address Certificates Are Dangerous and Usually Unnecessary

https://www.agwa.name/blog/post/ip_address_certs
2•agwa•13m ago•0 comments

The RAM shortage is coming for everything you care about

https://www.theverge.com/tech/880812/ramageddon-ram-shortage-memory-crisis-price-2026-phones-laptops
3•LordAtlas•14m ago•0 comments