I got frustrated with the endless back-and-forth prompting when building apps with AI. You know the drill: describe what you want, AI generates code, you ask for changes, rinse and repeat. It felt like we were missing something fundamental about how humans and AI should work together.
So I built Magic Canvas: what I call a visual space for context engineering. Instead of describing your app in text, you work directly on a canvas where the AI agent can see your layout, understand your design intentions, and write production code that matches exactly what you've created visually.
The key insight: when you can show instead of tell, the AI understands your context much better. Drag a button, resize a section, change colors: all of it translates directly into code changes. I've been using it to build multi-page applications, and it feels like a completely different way of working with AI. The visual context (layout, hierarchy, styling) gives the agent far more to work with than text prompts ever could.
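To make "visual context" a bit more concrete, here's a rough sketch of the kind of structured canvas snapshot an agent could receive alongside a prompt. The shapes and names below (CanvasElement, buildAgentContext) are purely illustrative, not the actual internals of Magic Canvas:

```typescript
// Illustrative only: a simplified shape for canvas state an agent might receive.
interface CanvasElement {
  id: string;
  type: "button" | "section" | "text" | "image";
  // Position and size on the canvas, in pixels.
  frame: { x: number; y: number; width: number; height: number };
  // Visual styling the user set directly on the canvas.
  style: { background?: string; color?: string; fontSize?: number };
  children?: CanvasElement[];
}

// Serialize the canvas into structured context for the agent, so
// "move this button" becomes concrete coordinates instead of a vague description.
function buildAgentContext(page: string, elements: CanvasElement[]): string {
  return JSON.stringify({ page, elements }, null, 2);
}

// Example: the user dragged a call-to-action button into the hero section.
const context = buildAgentContext("home", [
  {
    id: "hero",
    type: "section",
    frame: { x: 0, y: 0, width: 1280, height: 480 },
    style: { background: "#0f172a" },
    children: [
      {
        id: "cta-button",
        type: "button",
        frame: { x: 520, y: 360, width: 240, height: 48 },
        style: { background: "#22c55e", color: "#ffffff", fontSize: 16 },
      },
    ],
  },
]);

console.log(context); // This structured snapshot rides along with the text prompt.
```

The idea is that the prompt stops being the only channel: positions, hierarchy, and styling travel with it, so the agent doesn't have to guess at what you meant.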
Still early days, but I think this might be how we'll all be building with AI in the future. Would love to hear what you think.