I built a complete AI agent runtime that fits in 49KB. Written in Zig, zero external dependencies, runs on microcontrollers.
The full agent loop — call an LLM, parse the response, execute tools, iterate — on a $3 chip. Works with any OpenAI-compatible LLM provider (17+), persistent memory, auth, a skills/plugin system, and multi-channel I/O.
The Lite build (BLE only, no HTTP/TLS) is 49KB. The Full build with HTTP stack is ≤500KB. Both run on bare metal — no OS required.
Why Zig? No hidden allocations, no runtime, comptime for config. Every byte is accounted for. The entire codebase is 16 source files you can read in an afternoon.
BSL 1.1 license → Apache 2.0 in 2029. Free for <$1M revenue or <10K devices.
Site: https://krillclaw.com
Happy to answer questions about the architecture, Zig on embedded, or why I think every device should have an AI agent.