What I find most compelling about this framing is the maturity argument. Browser sandboxing has been battle-tested by billions of users clicking on sketchy links for decades. Compare that to spinning up a fresh container every time you want to run untrusted code.
The tradeoff is obvious though: you're limited to what browsers can do. No system calls, no arbitrary binaries, no direct hardware access. For a lot of AI coding tasks that's actually fine. For others it's a dealbreaker.
I'd love to see someone benchmark the actual security surface area. "Browsers are secure" is true in practice, but the attack surface is enormous compared to a minimal container.
Locking down features to provide a unified experience is what a browser should do, after all, no matter the performance cost. Of course, various vendors tried to break this by introducing platform-specific stuff, but that's also why IE, and later the pre-Chromium Edge, died a horrible death.
There were external sandbox escapes such as Adobe Flash, ActiveX, Java applets, and Silverlight, but those plugins were usually sandboxes of their own, albeit horrible ones...
But with the stabilization of asm.js and later WebAssembly, all of them are gone with the wind.
Sidenote: Flash's scripting language, ActionScript, is also directly responsible for the later generational design of Java-ahem-ECMAScript, and TypeScript too.
The problem discussed by both Simon and Paul, where the browser can absolutely trash any directory you give it, is perhaps the paradigmatic example of where git worktree is useful.
That's because you can check out the branch for the browser/AI agent into a worktree, and the only file there that halfway matters is the single .git file, which records where the worktree comes from.
It's really easy to fix that file up if it gets trashed, and it's really easy to use git to see exactly what the AI did.
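For anyone who hasn't used worktrees, here's a minimal sketch of that setup, written as Python driving git through subprocess. The branch name and sandbox path are made up, and it assumes you run it from inside an existing repo with git installed:

    # Minimal sketch: create a throwaway worktree on a new branch for the
    # agent to work in, then inspect the .git pointer file that links it
    # back to the main repository.
    import subprocess
    import pathlib

    worktree = pathlib.Path("../agent-scratch")   # hypothetical sandbox dir
    branch = "agent/scratch"                      # hypothetical branch name

    # "git worktree add -b <branch> <path>" checks out a new branch into a
    # separate directory that shares the main repo's object store.
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(worktree)],
        check=True,
    )

    # In a linked worktree, .git is a one-line pointer file, not a directory.
    # It looks roughly like:
    #   gitdir: /path/to/main-repo/.git/worktrees/agent-scratch
    print((worktree / ".git").read_text())

    # If the agent trashes the checkout, inspect or reset it like any other
    # branch, or discard the whole thing with:
    #   git worktree remove --force ../agent-scratch

The worktree shares the main repo's object database, so even if the agent wrecks that working directory, the history and your own checkout stay untouched.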