So my team built a web app: give it a YouTube link to your music and it generates a 3D dance animation in under 2 minutes. The core is a diffusion-based music-to-motion model (mvnt-m4) trained on proprietary mocap and label data from professional choreographers.
I think dance is the missing piece of AI-generated content. Just as performance made K-pop a global phenomenon, we believe AI dance will play the same role in the AI UGC era.
This is v0.1: a fast, experimental playground. Dance quality is still improving (m4.1 is in progress), and we're working on faster inference and finger/facial motion generation. We're also preparing API integrations with platforms like Higgsfield.
Our tech has some validation through an Epic MegaGrant, but we're still very early in finding user validation. Would love honest feedback on the output quality and what you'd want to see next.
Also on Product Hunt: https://www.producthunt.com/products/mvntstudio
Demo video: https://youtu.be/mjq2iAr96iM