But the real power is incremental, voice-driven workflows. Picture a logistics dispatcher: "Open a map." Done. "Load orders.csv from the warehouse server." Done. "Plot the delivery addresses." Done. "Shortest route." Done. "Pull GPS from the delivery truck." Done. "Recalculate with live traffic and truck position. Keep updating." Done. Six machines, one voice conversation. Each step builds on the last — the canvas accumulates state, every element is versioned with full undo/redo, nothing breaks. That's not a demo, that's a Tuesday morning. Simpler things work too. "Create a button" — a button appears on the canvas. "Make it transparent with shadows" — it updates live. "Create a 3D car game" — a driving simulation with traffic appears alongside your other widgets. "Add multiplayer with machine B" — done.
The mechanism:
    echo "plot delivery addresses on map" > /n/llm/coder/input
    cat /n/llm/coder/OUTPUT > /n/machine_name/scene/parse
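The shape of that exchange can be tried locally. A minimal stand-in, using two named pipes to play the role of the 9P-served input and OUTPUT files (the paths and the fake reply are illustrative, not the real agent):

```shell
# Local stand-in for the agent's file pair: two FIFOs emulate the
# 9P-served input and OUTPUT files.
mkdir -p /tmp/llmdemo && cd /tmp/llmdemo
rm -f input OUTPUT
mkfifo input OUTPUT

# Fake "agent": read one request from input, answer with generated code.
(req=$(cat input); echo "map.plot(\"$req\")" > OUTPUT) &

# Client side, the same shape as above: write the request, read the reply.
echo "plot delivery addresses on map" > input
result=$(cat OUTPUT)
wait
echo "$result"
```

Everything is a write to one file and a read from another; the transport underneath is irrelevant to the client.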
A single response can target multiple machines simultaneously through intrinsic routing — the agent's output is split by machine and streamed to each one:
    cat /n/llm/coder/A > /n/A/scene/parse
    cat /n/llm/coder/B > /n/B/scene/parse
    cat /n/llm/coder/C > /n/C/scene/parse
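The splitting itself can be sketched in a few lines of shell. This is only an illustration of the fan-out concept, not the real routing (the `machine:code` tag format here is an assumption; the actual split happens inside the agent's file server):

```shell
# Hypothetical sketch: one combined agent response, tagged per machine,
# routed into per-machine files A, B, C.
mkdir -p /tmp/routedemo && cd /tmp/routedemo
rm -f A B C

# Combined output, one "machine:code" line per target (format assumed).
cat > combined <<'EOF'
A:button.create()
B:map.open()
A:button.style(shadow)
C:gps.stream()
EOF

# Route each line to the file named by its machine prefix.
while IFS=: read -r machine code; do
  printf '%s\n' "$code" >> "$machine"
done < combined
```

Each per-machine file then feeds that machine's scene parser, exactly as the three `cat` commands above do.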
cat blocks until the agent starts generating, then streams code into each machine's scene parser. Widgets appear in real time. The multiplexer (riomux) stitches machines at the 9P wire level — mount a Raspberry Pi, a workstation, a delivery truck's onboard computer, and they're just directories. The agent's context includes what's already on every screen, so each new request builds on existing state. No unnecessary APIs. No message brokers. No orchestration framework. Just files, reads, and writes. Plan 9's idea, pushed as far as it goes.
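The block-then-stream behavior is ordinary Unix file semantics, observable with a plain FIFO (a local stand-in for the 9P-served file; the delayed writer and the generated lines are made up):

```shell
# The reader's cat blocks until the "agent" (a delayed writer) opens the
# file, then receives each line as it is generated.
mkdir -p /tmp/streamdemo && cd /tmp/streamdemo
rm -f agentout
mkfifo agentout

# "Agent": starts producing after a delay, emits code one line at a time.
( sleep 1
  echo 'widget = canvas.add("button")'
  sleep 1
  echo 'widget.opacity = 0.5'
) > agentout &

scene=$(cat agentout)   # blocks, then accumulates the streamed code
wait
```

No polling, no callbacks: the read itself is the synchronization.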
Experimental, no security model. Isolated networks only.