What it does:

- Multi-shot narratives: up to 6 shots per generation, each with its own prompt and duration
- Reference-driven consistency: upload 3 reference images to lock character appearance and style
- 4 input modalities simultaneously: text + images (up to 9) + video clips (up to 3) + audio (up to 3)
- @ reference system: assign specific roles to each input file (e.g., @Image1 for character, @Video1 for camera motion)
- Output: 2K resolution, 24fps, with native audio sync and lip-sync
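To make the limits above concrete, here is a minimal sketch of what a multi-shot request might look like. This is hypothetical illustration only, not the real Dola Seed API; the function name, field names, and @-style prompt references are assumptions based on the feature list.

```python
# Hypothetical request validator (not the actual Dola Seed API).
# Limits come from the feature list: 6 shots, 9 images, 3 videos, 3 audio.
MAX_SHOTS, MAX_IMAGES, MAX_VIDEOS, MAX_AUDIO = 6, 9, 3, 3

def validate_request(shots, images, videos, audio):
    """Check a multi-shot request against the stated input limits."""
    if not 1 <= len(shots) <= MAX_SHOTS:
        raise ValueError(f"expected 1-{MAX_SHOTS} shots, got {len(shots)}")
    if len(images) > MAX_IMAGES:
        raise ValueError(f"at most {MAX_IMAGES} reference images")
    if len(videos) > MAX_VIDEOS:
        raise ValueError(f"at most {MAX_VIDEOS} video clips")
    if len(audio) > MAX_AUDIO:
        raise ValueError(f"at most {MAX_AUDIO} audio tracks")
    return True

# Example: a 3-shot narrative where @Image1 pins the character
# and @Video1 supplies reference camera motion.
shots = [
    {"prompt": "@Image1 walks into a neon-lit bar", "duration_s": 4},
    {"prompt": "close-up of @Image1, camera motion from @Video1", "duration_s": 3},
    {"prompt": "@Image1 leaves at dawn", "duration_s": 5},
]
validate_request(shots, images=["Image1"], videos=["Video1"], audio=[])
```

Each shot carries its own prompt and duration, while the @ references tie every shot back to the same uploaded inputs, which is where the cross-shot consistency comes from.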
The problem: existing AI video tools generate single isolated clips. Want a 3-shot story? You generate three times, hope the character stays consistent, and manually stitch the clips together. Sora 2 ($20/mo) gives you one shot at a time. Runway has an editing suite but no multi-shot generation.
Dola Seed 2.0 lets you define the full narrative arc upfront. Each shot gets its own direction. Character consistency comes from the reference system, not luck.
Tech-wise, the interesting tradeoff was multi-shot coherence vs per-shot flexibility. We use reference conditioning across shots rather than a single monolithic generation, which gives better individual shot quality while maintaining ~90% character consistency.
Free to try (no account required for first 3 generations): https://dolaseed.site
Would love feedback, especially on multi-shot coherence quality and the reference system UX.