The problem: every time an LLM needs a chart, it generates chart code. Need a canvas? Generate canvas code. Need an editor? Generate editor code. Every conversation starts from zero. Nothing is reusable. The LLM burns tokens on boilerplate, and the result is fragile — one wrong CSS line and it breaks. Worse, the LLM can't see or react to what the user does with the generated UI.
MUP is the opposite approach. A MUP is a pre-built .html file with declared functions. You build a chart MUP once, and every LLM can use it forever — by calling `renderChart(data)`, not by writing 200 lines of Chart.js setup. The LLM doesn't generate UIs. It operates them.
This changes what the LLM spends its intelligence on. Instead of fighting with CSS and rebuilding the same components over and over, it focuses on behavior — reading user interactions, deciding what to do, orchestrating between panels.
How the protocol works:
A MUP is a standard HTML file with a JSON manifest in a `<script type="application/mup-manifest">` tag. The manifest declares the MUP's name, description, and functions with JSON Schema inputs. That's it — the LLM never sees the HTML or CSS, only these declarations.
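For a concrete picture, here is what a minimal chart MUP's manifest could look like. This is a sketch based on the description above (name, description, functions with JSON Schema inputs); the exact field names beyond those three are assumptions, so check the spec for the authoritative shape.

```html
<!-- chart.html: a MUP is an ordinary page carrying its manifest inline. -->
<script type="application/mup-manifest">
{
  "name": "chart",
  "description": "Renders a bar chart from supplied data",
  "functions": [
    {
      "name": "renderChart",
      "description": "Render the given values as a bar chart",
      "input": {
        "type": "object",
        "properties": {
          "values": { "type": "array", "items": { "type": "number" } }
        },
        "required": ["values"]
      }
    }
  ]
}
</script>
```

This manifest is all the LLM ever sees; the rest of the file (markup, CSS, chart code) is opaque to it.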
The host loads the MUP in an iframe, injects a tiny SDK (`mup.*`), and the MUP registers its function handlers. From there, two things can happen: (1) the LLM calls a function — the host routes it to the MUP, the UI updates, and the result goes back to the LLM. (2) The user interacts with the UI — the MUP calls `mup.notifyInteraction()`, which feeds a message into the LLM's context so it can react.
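A minimal sketch of that two-way flow, with the injected `mup.*` SDK stubbed in-memory so it runs standalone. Only `mup.notifyInteraction()` is named in the post; `registerFunction`, `call`, and the message shapes here are illustrative assumptions, not the real SDK surface.

```javascript
// Stub of the SDK the host would inject into the MUP's iframe.
const mup = {
  handlers: new Map(),
  outbox: [],                        // events the host would forward to the LLM
  registerFunction(name, handler) {  // MUP side: declare a function handler
    this.handlers.set(name, handler);
  },
  call(name, input) {                // host side: route an LLM tool call in
    const handler = this.handlers.get(name);
    if (!handler) throw new Error(`unknown function: ${name}`);
    return handler(input);           // result goes back to the LLM
  },
  notifyInteraction(event) {         // MUP side: report a user action
    this.outbox.push(event);
  },
};

// The MUP registers a handler for each function its manifest declares.
mup.registerFunction("renderChart", ({ values }) => {
  // ...update the DOM here; return a result for the LLM...
  return { rendered: values.length };
});

// (1) The LLM calls a function; the UI updates and a result flows back.
console.log(mup.call("renderChart", { values: [1, 2] })); // { rendered: 2 }

// (2) The user interacts; the MUP feeds it into the LLM's context.
mup.notifyInteraction({ type: "pixel_drawn", detail: { x: 3, y: 7 } });
```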
That's the entire protocol. No RPC framework, no streaming protocol, no handshake negotiation. A MUP declares what it can do; a host decides what to allow. ~400 lines of spec.
What's new since our post 4 days ago:
We built mup-agent, a local Node.js agent to show this working end-to-end. The agent loop runs on pi-agent-core, with pi-ai handling multi-provider LLM calls. Each MUP's declared functions are automatically registered as agent tools. When the LLM calls a tool, the agent routes it to the right MUP via WebSocket; when the user interacts with a MUP, the interaction is fed back into the agent loop. The browser is just a rendering surface — chat on the left, a grid of MUP iframes on the right.
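The manifest-to-tools step can be sketched like this. The real pi-agent-core / pi-ai APIs aren't shown here; `manifestToTools`, the tool object shape, and the name-prefixing scheme are illustrative assumptions — the point is just that declared functions become tools mechanically, with the JSON Schema passed through unchanged.

```javascript
// Hypothetical sketch: turn a MUP manifest into agent tool definitions.
function manifestToTools(manifest, routeToMup) {
  return manifest.functions.map((fn) => ({
    // Prefix with the MUP name so tools from different MUPs can't collide.
    name: `${manifest.name}_${fn.name}`,
    description: fn.description,
    input_schema: fn.input,          // JSON Schema, passed through as-is
    // When the LLM calls the tool, route it to the owning MUP
    // (in mup-agent, over the WebSocket to the browser).
    execute: (args) => routeToMup(manifest.name, fn.name, args),
  }));
}

// Example manifest for a chart MUP (shape assumed from the spec summary above).
const chartManifest = {
  name: "chart",
  description: "Renders a bar chart",
  functions: [{
    name: "renderChart",
    description: "Render the given values as a bar chart",
    input: {
      type: "object",
      properties: { values: { type: "array", items: { type: "number" } } },
      required: ["values"],
    },
  }],
};

const tools = manifestToTools(chartManifest, (mupName, fnName, args) =>
  `would route ${mupName}.${fnName}(${JSON.stringify(args)}) over the WebSocket`);
console.log(tools[0].name); // chart_renderChart
```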
You activate a pixel art canvas and a chart. You draw something. The LLM notices, analyzes it, and populates the chart — no prompt needed. The MUPs are reusable UI; the LLM is pure behavior.
git clone https://github.com/Ricky610329/mup.git
cd mup/mup-agent && npm install
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
npm start
16 built-in MUPs: chess, pixel art, drum machine, slides, kanban, file workspace, camera, voice, and more. Works with Anthropic, OpenAI, Google, Groq, and xAI.

Spec (~400 lines): https://github.com/Ricky610329/mup/blob/main/spec/MUP-Spec.m...
Design philosophy: https://github.com/Ricky610329/mup/blob/main/spec/MUP-Philos...
Repo: https://github.com/Ricky610329/mup