I built Blooming, a visual AI workspace where you drag and drop text, image, and video “nodes” onto a whiteboard and chain AI models together.
It's kinda like n8n but for AI art:
• Node-based canvas
• Multi-model switching (test different models side by side and iterate on multiple versions)
• Pipe text or image output into other nodes to refine prompts, turn images into videos, or explain images
Demo link: https://youtu.be/TdFzhxeRFNg
I’d love your feedback on UI clarity, performance, and which models to integrate next.
What’s confusing or missing for AI image and video power users?
Everything here is live in prod (you can sign in to use it). Trying to see if this is useful to people.
For those interested in the tech stack I used, see the comment below.
Thanks for checking it out – Edrick
edrickdch:
I kept bouncing between model-specific UIs (Kling 1.6, Hailuo Minimax, Midjourney, …) and lost hours shuffling prompts, AI images, and videos between them. I wanted a single canvas that lets me:
1. Sketch an idea visually – like wiring Lego blocks
2. Swap any model in seconds – no tab switching, all the assets in one place
3. Download the output easily – no watermarks
You can iterate on the same prompt, compare outputs across image and video models, and turn images into videos.
How it works under the hood
- Frontend: Next.js 15 + React Flow for the node graph (minimal sketch below)
- Backend: Next.js + Python scripts
- Inference: models from Replicate and OpenRouter (example route handler below)
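For those who haven't used React Flow: the whole canvas reduces to arrays of nodes and edges, and edges are what "piping" output between nodes maps onto. A minimal sketch, assuming React Flow v11 – the node ids, labels, and layout are made up for illustration, not Blooming's actual schema:

    // Two-node canvas: a text prompt wired into an image model node.
    import ReactFlow, { Background, Controls, type Node, type Edge } from "reactflow";
    import "reactflow/dist/style.css";

    const nodes: Node[] = [
      { id: "prompt", position: { x: 0, y: 0 }, data: { label: "Text prompt" } },
      { id: "image", position: { x: 250, y: 0 }, data: { label: "Image model" } },
    ];

    // An edge pipes one node's output into the next node's input.
    const edges: Edge[] = [{ id: "prompt->image", source: "prompt", target: "image" }];

    export default function Canvas() {
      return (
        <div style={{ width: "100%", height: "100vh" }}>
          <ReactFlow defaultNodes={nodes} defaultEdges={edges}>
            <Background />
            <Controls />
          </ReactFlow>
        </div>
      );
    }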
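On the inference side, the simplest version is a Next.js route handler wrapping the Replicate client. A hedged sketch – the endpoint path, model id, and request shape are my assumptions, not Blooming's actual code:

    // app/api/generate/route.ts (hypothetical endpoint)
    import Replicate from "replicate";

    const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

    export async function POST(req: Request) {
      const { prompt } = await req.json();

      // replicate.run() waits for the prediction to finish; for image models
      // the output is typically a list of file URLs.
      const output = await replicate.run("black-forest-labs/flux-schnell", {
        input: { prompt },
      });

      return Response.json({ output });
    }

For long-running video models you'd likely switch to Replicate's webhooks or polling instead of blocking the request, but the happy path is about this small.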
Roadmap
- Add more models
- Export assets into an editor
- Upload your own assets
- WebSockets to enable multiple artists to collaborate
Feedback wanted:
- Any bottlenecks?
- Which model/node is highest on your wish-list?
Happy to answer anything! Would love to know how you’d use this and whether you’d want to see anything done differently.