I built Seedance3AI because I wanted a single, simple place to iterate on short visual ideas without hopping between tools.
It currently supports:

- text → video (short clips)
- image → video (animate a reference image)
- AI image generation (useful for thumbnails/storyboards)
The workflow is intentionally minimal: pick a mode, set a couple of creative controls, generate, then download.
Implementation notes:

- [stack: e.g. Next.js + serverless API + a job queue + object storage]
- [anything you learned the hard way: rate limits, retries, prompt history, etc.]
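On the rate-limits/retries point: one pattern that tends to matter for generation APIs is wrapping the call in retry-with-exponential-backoff so transient 429s don't surface as failed jobs. A minimal sketch below — the `withRetry` helper and its parameters are illustrative, not the actual Seedance3AI code:

```typescript
// Hypothetical retry helper for flaky/rate-limited generation calls.
// All names and defaults here are illustrative assumptions.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseMs = 500 }: { attempts?: number; baseMs?: number } = {}
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Exponential backoff between attempts: baseMs, 2*baseMs, 4*baseMs, ...
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
    }
  }
  // Out of attempts: surface the last error to the caller (e.g. the job queue).
  throw lastErr;
}
```

In a queue-backed setup, the worker would call something like `withRetry(() => generateClip(prompt))` and only mark the job failed once retries are exhausted.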
What I’d love feedback on:

1) Which controls matter most (duration/aspect/seed/style, etc.)
2) Prompt UX: what’s confusing or missing
3) Any failure cases you hit (timeouts, weird outputs, etc.)
[Optional: This is an independent project and not an official ByteDance/Seedance release.]
Thanks!