The LLM generates only the glue logic between two deterministic interfaces:

- a UI schema the frontend knows how to render
- a constrained runtime interface exposed by the backend, from specific functions like list_comments to broader primitives like a restricted http_call
Both interfaces are fixed, typed, and inspectable. The LLM can’t invent UI components or backend capabilities. It only composes what already exists.
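A minimal sketch of the shape of this, with hypothetical names (the UINode union, runtime object, and generatedController are illustrative, not the actual API). The UI schema and runtime are fixed and typed; the only part the model would emit is the controller function, which can only compose what the two interfaces already expose:

```typescript
// Fixed UI schema: the only components the frontend knows how to render.
type UINode =
  | { kind: "list"; items: UINode[] }
  | { kind: "text"; value: string };

// Fixed runtime interface: the only capabilities the backend exposes.
const runtime = {
  // Specific function (stubbed data for the sketch).
  list_comments: async (_postId: string) => [
    { author: "ada", body: "nice post" },
  ],
  // Broader primitive, restricted by an allowlist checked before any call.
  http_call: async (url: string) => {
    const allowed = ["https://api.example.com"];
    if (!allowed.some((prefix) => url.startsWith(prefix))) {
      throw new Error(`http_call blocked: ${url}`);
    }
    return { status: 200 };
  },
};

// The glue the LLM would generate, written by hand here to show its shape:
// it calls runtime capabilities and maps the results into schema-valid UI.
async function generatedController(): Promise<UINode> {
  const comments = await runtime.list_comments("post-1");
  return {
    kind: "list",
    items: comments.map((c) => ({
      kind: "text" as const,
      value: `${c.author}: ${c.body}`,
    })),
  };
}
```

Because the controller's return type is the UI schema and its only callable surface is the runtime object, an invented component or capability fails the type check rather than reaching production.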
This is adjacent to A2UI, but the cut is different. A2UI renders agent actions into UI. Here, UI and runtime are first-class, and the LLM generates the composition logic without an open-ended action loop.
You can also think of it as MVC where the View and Model are fixed, and the Controller is generated on demand.
Write-up and short demo video: https://cased.com/blog/2025-12-11-how-we-build-generative-ui...
Interested in feedback on failure modes, constraints, and related prior art.