I recently refactored my frontend to use a Generative UI pattern (inspired by Google's new A2UI framework) because I realized a static chat interface fails for complex shopping intents.
The Problem: A user buying a single item needs a completely different UX than a user planning a complex project. A standard "list of cards" doesn't work for both.
The Solution: I built an Intent-to-UI engine where the LLM decides the interface structure based on the query.
How the Logic Works:
Intent Classification: The LLM first classifies the prompt into one of three modes.
Dynamic Rendering: It returns a JSON payload (following an A2UI-style schema) that my custom React renderer maps to specific components (a minimal sketch follows this list):
Single Item Intent (e.g., "Best Gaming Monitor"): Triggers a Comparison View. It renders a "Best Match" card with detailed specs alongside 3 alternatives for quick comparison.
Bundle Intent (e.g., "Build an AMD Gaming PC"): Triggers a Grouped View. It clusters products by category (CPU, GPU, RAM) to ensure the build is complete.
DIY/Project Intent (e.g., "How to build a deck"): Triggers a Plan View. It renders a step-by-step timeline mixed with the required materials. The number of steps and the product complexity adjust dynamically based on the user's stated experience level.
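To make the renderer side concrete, here is a minimal sketch of the kind of schema-to-component mapping this implies. All the names here (`UISchema`, `DynamicView`, the stub views) are illustrative assumptions, not the actual implementation:

```tsx
import React from "react";

// Hypothetical shape of the LLM's output: which view to mount and what goes in it.
type Intent = "single_item" | "bundle" | "project";
type ViewKind = "comparison" | "grouped" | "plan";

interface Section {
  title: string;        // e.g. "Best Match", "GPU", or "Step 2: Framing"
  productIds: string[]; // SKUs resolved by the retrieval layer
}

interface UISchema {
  intent: Intent;
  view: ViewKind;
  sections: Section[];
}

// Stub views for the sketch; the real ones would render spec cards,
// category clusters, and a step timeline respectively.
const ComparisonView: React.FC<{ sections: Section[] }> = ({ sections }) => (
  <div>{sections.map((s) => <section key={s.title}>{s.title}</section>)}</div>
);
const GroupedView = ComparisonView;
const PlanView = ComparisonView;

// The renderer is a pure lookup: the model's JSON decides which view mounts.
const registry: Record<ViewKind, React.FC<{ sections: Section[] }>> = {
  comparison: ComparisonView,
  grouped: GroupedView,
  plan: PlanView,
};

export function DynamicView({ schema }: { schema: UISchema }) {
  const View = registry[schema.view];
  return <View sections={schema.sections} />;
}
```

The useful property of this pattern is that the LLM never emits components, only data: the registry bounds what it is allowed to render, so a malformed or adversarial response can't mount arbitrary UI.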
The Stack:
Backend: Node.js / TypeScript
Search: pgvector (PostgreSQL) for semantic retrieval of Amazon/Retailer SKUs (example query sketched after this list).
Frontend: React (with a custom renderer for the A2UI schemas).
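On the retrieval side, here is a minimal sketch of the kind of pgvector query this setup implies, assuming a hypothetical `products` table with an `embedding` column and the node-postgres client (the table, columns, and embedding dimension are my guesses, not the actual schema):

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Rank SKUs by cosine distance to the query embedding. Assumes a
// `products(sku, title, embedding vector(1536))` table with a pgvector
// index; `<=>` is pgvector's cosine-distance operator.
async function searchSkus(queryEmbedding: number[], limit = 10) {
  const { rows } = await pool.query(
    `SELECT sku, title, embedding <=> $1::vector AS distance
       FROM products
      ORDER BY embedding <=> $1::vector
      LIMIT $2`,
    [`[${queryEmbedding.join(",")}]`, limit],
  );
  return rows as { sku: string; title: string; distance: number }[];
}
```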
Context: I pivoted to this "deep complexity" approach after Microsoft launched its generic Copilot shopping agent 24 hours after my initial beta. I realized I couldn't compete on generic search, so I'm focusing on the complex, messy projects that require dynamic UI adaptation.
It’s live in Beta. I’d love feedback on the "Intent Router"—try breaking it by asking for something ambiguous like "Coffee" vs "Coffee Station" to see if the UI adapts correctly.
Link: https://logicart.ai
MajidAliSyncOps•1h ago
What’s interesting here is how you’re letting intent drive structure instead of forcing users into one workflow. On the infra side, we notice a similar pattern: early flexibility boosts speed, but once usage grows, observability and failure boundaries become the real bottleneck.
Curious how you’re thinking about edge cases where user intent shifts mid-session—do you reclassify continuously or lock the UI once a path is chosen?
ahmedm24•1h ago