MUP (Model UI Protocol) lets you embed interactive UI directly in LLM chat. Each MUP is just a single .html file. The same functions can be triggered by the user (clicking a button) or by the LLM (function call). Both sides see each other's actions in real time.
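The dual-trigger idea can be sketched as a single function registry that both sides call through, so each side's actions are visible to the other. This is only an illustration of the concept, not the actual spec: the names (`register`, `invoke`, the `origin` tag, the shared `log`) are all hypothetical.

```javascript
// Hypothetical sketch of the dual-trigger model: one registry of functions,
// invoked the same way whether the trigger is a user event or an LLM tool call.
// All names here are illustrative, not taken from the MUP spec.
const registry = new Map();
const log = [];

function register(name, fn) {
  registry.set(name, fn);
}

// Single entry point for both sides. A real host would forward each logged
// action to the other party so user and LLM see each other's moves live.
function invoke(name, args, origin) {
  const fn = registry.get(name);
  if (!fn) throw new Error(`unknown function: ${name}`);
  const result = fn(args);
  log.push({ name, args, origin });
  return result;
}

register("setPixel", ({ x, y, color }) => ({ x, y, color }));

// The user clicks a pixel in the UI...
invoke("setPixel", { x: 1, y: 2, color: "red" }, "user");
// ...and the LLM sets another one via a function call, through the same path.
invoke("setPixel", { x: 3, y: 4, color: "blue" }, "llm");
```

Because both triggers converge on one code path, there is no separate "AI mode": the UI state ends up identical regardless of who acted.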
The repo includes a PoC host and 9 example MUPs. Demo mode lets you interact with the UI side without an API key. Add an OpenAI key to see full LLM-UI collaboration.
Demo videos in the README show things like: drawing pixel art and then charting its colors, a camera that captures a scene which the LLM recreates, and making beats on a drum machine together with the LLM.
I'd love feedback on the protocol design.
newexpand•1h ago
The dual interaction model — where both the user and the LLM can trigger the same functions — is a nice design choice. It avoids the "watch the AI work" problem where you're just a spectator.
Curious about the protocol design: how do you handle conflicts when the user and LLM try to act on the same element simultaneously? And is there a way for MUPs to communicate with each other, or is each one isolated?
Ricky_Tsou•1h ago
MUP is just a protocol — it defines the format and communication between a MUP and its host, but it doesn't dictate how the host handles these things.
Conflict resolution depends on the host implementation. Different applications may need different strategies (queuing, locking, last-write-wins, etc.) — that's a host-level decision, not a protocol-level one.
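To make that concrete, here is a minimal sketch of one host-level strategy, last-write-wins keyed by element, with stale actions dropped by timestamp. This is purely illustrative of a host implementation choice; the names and the timestamp scheme are assumptions, and nothing like this appears in the protocol itself.

```javascript
// Hypothetical host-side last-write-wins resolution, keyed by element id.
// A write only lands if no newer write for that element has been seen.
const state = new Map();

function applyAction(elementId, value, timestamp) {
  const current = state.get(elementId);
  if (current && current.timestamp > timestamp) {
    return false; // a newer write already landed; drop the stale action
  }
  state.set(elementId, { value, timestamp });
  return true;
}

applyAction("cell-7", "red", 100);   // user paints a cell
applyAction("cell-7", "blue", 105);  // LLM paints the same cell later → wins
applyAction("cell-7", "green", 90);  // delayed user event arrives stale → dropped
```

A host that needed stronger guarantees could swap this for a queue or a lock per element without any change to the MUPs themselves, which is exactly why the spec leaves it open.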
Same for MUP-to-MUP communication. The protocol keeps each MUP isolated by design, but a host could absolutely build a coordination layer on top. In our PoC, the LLM acts as the orchestrator between MUPs, but that's just one approach.
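A coordination layer like that could look something like the sketch below: each MUP talks only to the host, and the host (or the LLM acting through it) decides what to forward where. The `Host`, `attach`, and `route` names are made up for illustration; they are not part of the spec or the PoC.

```javascript
// Hypothetical host-side coordination layer over isolated MUPs.
// MUPs never address each other directly; the host routes between them.
class Host {
  constructor() {
    this.mups = new Map(); // MUP id → onMessage callback
  }
  attach(id, onMessage) {
    this.mups.set(id, onMessage);
  }
  // A MUP emits an event to the host; the host forwards it to another MUP.
  route(fromId, toId, payload) {
    const target = this.mups.get(toId);
    if (target) target({ from: fromId, payload });
  }
}

const host = new Host();
const received = [];
host.attach("pixel-art", (msg) => received.push(msg));
host.attach("drum-machine", (msg) => received.push(msg));

// e.g. the pixel-art MUP's palette could drive the drum machine's pattern
host.route("pixel-art", "drum-machine", { colors: ["red", "blue"] });
```

Keeping routing in the host preserves MUP isolation: a MUP that never gets attached to a coordinating host still works standalone.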
The spec intentionally stays out of these decisions so it can work across different hosts and use cases.