Backstory: I've been using many agents in parallel as I work on a somewhat ambitious financial analysis tool. I was juggling agents working on epics for the linear solver, the persistence layer, the front-end, and planning for the second-generation solver. I was losing my mind playing whack-a-mole with the permission prompts. YOLO mode felt so tempting. And yet.
Then it occurred to me: what if YOLO mode isn't so bad? Decision fatigue is a thing. If I could cap the blast radius of a confused agent, maybe I could just review once. Wouldn't that be safer?
So that day, while my kids were taking a nap, I decided to see if I could put YOLO-mode Claude inside a sandbox that blocks exfiltration and regulates git access. The result is yolo-cage.
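To make the idea concrete, here is a minimal sketch of that kind of cage, not yolo-cage's actual mechanism: run the YOLO-mode agent inside a container with no network egress and no git credentials, so the blast radius is capped at the mounted worktree. The image name `agent-sandbox-image` is hypothetical; `--dangerously-skip-permissions` is Claude Code's real YOLO flag.

```shell
# Hypothetical sketch (not yolo-cage's implementation): contain a
# YOLO-mode agent so it can neither exfiltrate data nor push to remotes.
docker run --rm -it \
  --network none \
  -v "$PWD:/workspace" \
  -w /workspace \
  agent-sandbox-image \
  claude --dangerously-skip-permissions
# --network none blocks all egress, so nothing leaves the box.
# Because no ~/.ssh keys or credential helpers are mounted, `git push`
# has nothing to authenticate with even if the agent tries it.
# The agent can still commit freely inside /workspace; you review once,
# at merge time, instead of answering prompts turn by turn.
```

A real tool would want finer-grained egress control (an allowlist for model API traffic, say) rather than `--network none`, but the containment principle is the same.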
Also: the AI wrote its own containment system from inside the system's own prototype. Which is either very aligned or very meta, depending on how you look at it.
dfajgljsldkjag•1h ago
borenstein•1h ago
On the one hand, you're right: those commit messages are proof positive that the security is not perfect. On the other hand, my threat model is that most risk from AI agents stems from human inattention, and that agents powered by hyperscaler models are unlikely to be overtly malicious unless an outside attacker is involved.
There are known limitations to the security model, and they are limitations I can accept. But I do believe that yolo-cage provides defense in depth, and that the protection it offers exceeds what you get from the permission prompts that pop up during agent turns in Claude Code.