Recently I saw that ClawHub had "341 malicious skills", and couldn't help thinking that WASM/WASI resolves most of these issues by default, since everything is sandboxed.
So I've spent my weekend building Asterbot, a modular AI agent where every capability is a swappable WASM component.
Want to add web search? That's just another WASM component. Memory? Another component. LLM provider? Also a component.
The components are all sandboxed: they only have access to what you explicitly grant, e.g. a single directory like ~/.asterbot (the default). A component can't read any other part of the system.
Components can be written in any language (Rust, Go, Python, JS), are sandboxed via WASI, and are pulled from the asterai registry. Publish a component, set an env var to authorise it as a tool, and asterbot discovers and calls it automatically. Asterai provides a lightweight runtime on top of wasmtime that bundles components, configures their env vars, and runs them.
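To make the discovery/authorisation flow concrete, here's a minimal Python sketch of the pattern: a registry of components, of which only the ones explicitly authorised via an env var become callable tools. The names (`ASTERBOT_TOOLS`, the `Component` class, the registry contents) are illustrative assumptions, not asterbot's actual API.

```python
import os

class Component:
    """Stand-in for a sandboxed WASM component exposing one capability."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def call(self, *args):
        return self.handler(*args)

# Stand-in for everything published to a registry (e.g. the asterai registry).
REGISTRY = {
    "web-search": Component("web-search", lambda q: f"results for {q!r}"),
    "memory": Component("memory", lambda k: f"recalled {k!r}"),
}

def discover_tools(env=os.environ):
    """Only components explicitly authorised via an env var become tools."""
    allowed = set(env.get("ASTERBOT_TOOLS", "").split(","))
    return {name: c for name, c in REGISTRY.items() if name in allowed}

# Grant exactly one capability; everything else stays unreachable.
tools = discover_tools({"ASTERBOT_TOOLS": "web-search"})
print(sorted(tools))  # → ['web-search']
```

The point of the pattern is that authorisation is an explicit allow-list, mirroring how WASI grants filesystem access only to preopened directories.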
It's still a proof of concept, but I've tested all functionality in the repo and I'm happy with how it's shaping up.
Happy to answer any questions!