While I was waiting in Helsinki for the night train to Oulu on Friday, I pondered two problems: one I have been thinking about for years, the other for months.
1. How can I build a simple *service-as-a-service* endpoint that pulls data, maybe does a little work on it (e.g. whittling it down, converting XML to JSON, etc.), caches it, and returns it? To be really useful this seems to require *eval*, and then it becomes a whole deployment-and-code-review affair (for good reason).
2. How can I build an LLM-powered agent that… you get the idea. If service-as-a-service, why not *agent-as-a-service*? I'd already played with LangChain and found it quite fiddly even for simple things, which had led me to build a lighter, schema-first alternative to Zod.
The epiphany I had was that these are the same question, and the problem was *eval*. So why not make *eval* completely safe?
The result is *agent-99* – a Turing-complete, cost-limited virtual machine that enables "Safe Eval" anywhere (Node, Bun, Deno, and the Browser). The runtime model is so simple it could easily be ported to any of your favorite languages: Python, Rust, Go, Java, Zig… Haskell. It's a perfect fit for Erlang.
Oh, and because the underlying language is defined with JSON Schema, it's easy to hand its operations to an agent as a set of tools, and *agent-99* makes that easy too.
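To give a flavor of what that means, here's a sketch: an operation described with JSON Schema maps almost one-to-one onto the tool format LLM APIs already expect. The operation below is a made-up example for illustration, not the library's actual schema.

```ts
// A made-up operation described with JSON Schema, the way a schema-first VM can expose one.
const fetchJsonOp = {
  name: 'fetchJson',
  description: 'Fetch a URL and return the parsed JSON body',
  parameters: {
    type: 'object',
    properties: { url: { type: 'string' } },
    required: ['url'],
  },
} as const;

// Handing it to an LLM as a tool is essentially a relabeling exercise, because
// tool-calling APIs already use JSON Schema to describe parameters:
const toolSpec = { type: 'function', function: fetchJsonOp };
```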
It's the infrastructure for *sky-net* but, you know, type-safe, deeply asynchronous, and without a halting problem.
* *agent-99 repo:* https://github.com/tonioloew...
* *agent-99-playground (vibe coded in ~2 hours):* https://github.com/brainsnorkel/agent99-playground
### The Core Idea
Most Agent frameworks (like LangChain) rely on heavy "Chain" classes or graph definitions that are hard to inspect, hard to serialize, and hard to run safely on the client.
`agent-99` takes a different approach: *Code is Data.*
1. You define logic using a *Fluent TypeScript Builder* (which feels like writing standard JS).
2. This compiles to a *JSON AST* (Abstract Syntax Tree).
3. The AST is executed by a *Sandboxed VM* (~7kB gzipped core).
Or to put it another way:
1. A program is a function
2. A function is data organized in a syntax tree
3. An agent is a function that takes data and code
`agent-99` provides a *builder* with a fluent API for defining your language and writing programs, and a *virtual machine* for executing those programs with explicitly provided capabilities.
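To make the "code is data" idea concrete, here's a toy sketch of a builder emitting a JSON syntax tree. The node shapes and helper names below are purely illustrative, not the library's real API.

```ts
// Toy builder: plain functions that assemble a JSON-serializable syntax tree.
type Node =
  | { kind: 'lit'; value: unknown }
  | { kind: 'call'; cap: string; args: Node[] }   // invoke an injected capability
  | { kind: 'seq'; steps: Node[] };               // run steps in order, return the last

const lit = (value: unknown): Node => ({ kind: 'lit', value });
const call = (cap: string, ...args: Node[]): Node => ({ kind: 'call', cap, args });
const seq = (...steps: Node[]): Node => ({ kind: 'seq', steps });

// "A program is a function; a function is data organized in a syntax tree."
const program: Node = seq(
  call('fetchJson', lit('https://example.com/feed')),  // hypothetical capability
);

// Because it's plain JSON it can be stored, diffed, sent over the wire, or emitted by an LLM.
console.log(JSON.stringify(program, null, 2));
```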
### Why use a VM?
* *Safety:* It sidesteps the halting problem with a "Fuel" (Gas) counter: if the agent loops forever, it runs out of gas and dies (see the sketch after this list).
* *Security:* It uses *Capability-Based Security*. The VM has zero access to fetch, disk, or DB unless you explicitly inject those capabilities at runtime.
* *Portability:* Because the "code" is just JSON, you can generate an agent on the server, send it to the client, and run it instantly. No build steps, no deployment.
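Here's a minimal sketch of how a fuel-metered, capability-injected evaluator for a tree like the one above can work. Again, this is an illustration of the technique, not the actual implementation.

```ts
// The toy AST type from the earlier sketch, repeated so this block stands alone.
type Node =
  | { kind: 'lit'; value: unknown }
  | { kind: 'call'; cap: string; args: Node[] }
  | { kind: 'seq'; steps: Node[] };

type Capability = (...args: unknown[]) => Promise<unknown> | unknown;

async function run(
  node: Node,
  caps: Record<string, Capability>,  // the ONLY way the program touches the outside world
  budget: { fuel: number },          // shared, mutable fuel counter
): Promise<unknown> {
  if (--budget.fuel < 0) throw new Error('Out of fuel');  // the pragmatic answer to infinite loops
  switch (node.kind) {
    case 'lit':
      return node.value;
    case 'seq': {
      let last: unknown;
      for (const step of node.steps) last = await run(step, caps, budget);
      return last;
    }
    case 'call': {
      const cap = caps[node.cap];
      if (!cap) throw new Error(`Capability not granted: ${node.cap}`);  // capability-based security
      const args: unknown[] = [];
      for (const a of node.args) args.push(await run(a, caps, budget));
      return cap(...args);
    }
  }
}

// Usage: the host decides exactly what the program may do, and how much it may spend.
const result = await run(
  { kind: 'call', cap: 'fetchJson', args: [{ kind: 'lit', value: 'https://example.com/feed' }] },
  { fetchJson: async (url) => (await fetch(String(url))).json() },  // explicitly injected
  { fuel: 1_000 },
);
```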
### Batteries Included (But Optional)
While the core is tiny, I wanted a "batteries included" experience for local dev, so we built a standard library that lazy-loads its heavier dependencies (the sketch after this list shows the pattern):
* *Vectors:* Local embeddings via `@xenova/transformers`.
* *Store:* In-memory Vector Search via `@orama/orama`.
* *LLM:* A bridge to local models (like LM Studio).
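The lazy-loading itself is just dynamic `import()`, so you only pay for what you use. A rough sketch of the pattern (the wrapper is hypothetical; `@xenova/transformers` and its `pipeline()` API are real):

```ts
// Sketch of a lazily-loaded embedding capability.
let embedder: ((text: string) => Promise<number[]>) | undefined;

export async function embed(text: string): Promise<number[]> {
  if (!embedder) {
    // The heavy dependency is only loaded the first time embeddings are requested.
    const { pipeline } = await import('@xenova/transformers');
    const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
    embedder = async (t) => {
      const output = await extractor(t, { pooling: 'mean', normalize: true });
      return Array.from(output.data as Float32Array);
    };
  }
  return embedder(text);
}

// Injected into the VM like any other capability, e.g. run(program, { embed }, { fuel: 10_000 }).
```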
### Proof of Concept
I sent the repo to a friend (https://github.com/brainsnorkel). He literally "vibe coded" a full *visual playground* in a couple of hours using the library. You can see the definitions generating the JSON AST in real-time.
### The Stack
* *agent-99:* The runtime.
* *tosijs-schema:* The underlying schema/type-inference engine: https://github.com/tonioloewald/tosijs-schema
I’d love to hear your thoughts on this approach to code-as-data and agents-as-data.