After testing the new Seedance 2.0 models from ByteDance, I noticed they handle scene changes differently from other video models I've tried. It feels like Seedance actually understands "editorial logic," likely because ByteDance (the team behind CapCut/TikTok) trained it on professional editing patterns rather than just raw pixels.
I built aiseedance2.app to experiment with this "narrative-first" workflow.
The Current Setup: The Seedance 2.0 API is still in a closed rollout, so I've launched this playground using Seedance 1.5 Pro as the engine for now. Even with 1.5 Pro, the temporal consistency and "shot flow" are significantly better than what I've seen in other models. I'll be migrating to the 2.0 multi-modal reference system as soon as it's fully public.
Why this matters: If we want AI video to be used for actual filmmaking, the model needs to understand how to "cut" like a human editor. Seedance seems to be the first one to get this right.
I’d love to get your thoughts on the "flow" of these generations.