I built AISeedream5 as a lightweight web UI to generate visuals quickly:

- text-to-image
- image-to-image (with a reference image)
- text-to-video / image-to-video (short clips)
The problem I was trying to solve: I often want to iterate on concepts (ad creatives, storyboards, thumbnails) without juggling multiple tools or losing prompt history. So I focused on a straightforward workflow: pick a mode, optionally add a reference image, write a prompt, generate, download.
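To make that workflow concrete, here's a rough sketch of what a generation request could look like from the UI's point of view. The types, field names, and the /api/generate endpoint are illustrative assumptions, not the actual API:

    // Illustrative only: real field names and endpoints may differ.
    type Mode = "text-to-image" | "image-to-image" | "text-to-video" | "image-to-video";

    interface GenerationRequest {
      mode: Mode;
      prompt: string;
      referenceImage?: string;  // upload URL or base64; used by the image-to-* modes
      durationSeconds?: number; // only meaningful for the video modes
    }

    async function generate(req: GenerationRequest): Promise<{ jobId: string }> {
      const res = await fetch("/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req),
      });
      if (!res.ok) throw new Error(`generate failed: ${res.status}`);
      return res.json();
    }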
Notes:

- There’s a small free tier so you can try it without any upfront setup.
- [If applicable: Not affiliated with the Seedream/ByteDance team; this is an independent UI.]
Tech / implementation:

- [Your stack: e.g. Next.js + serverless API + a job queue for renders]
- [Any details HN would care about: latency, caching, safety filters, etc.]
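Since renders can take a while, a serverless API in front of a job queue usually means the client submits a job and then polls for the result. A minimal sketch of that polling loop, assuming a hypothetical /api/jobs/:id status endpoint (not necessarily how it's actually wired up):

    // Hypothetical client-side polling for an async render job.
    interface JobStatus {
      state: "queued" | "running" | "done" | "error";
      resultUrl?: string; // download link once the render finishes
      error?: string;
    }

    async function waitForJob(jobId: string, intervalMs = 2000): Promise<string> {
      for (;;) {
        const res = await fetch(`/api/jobs/${jobId}`);
        if (!res.ok) throw new Error(`status check failed: ${res.status}`);
        const job: JobStatus = await res.json();
        if (job.state === "done" && job.resultUrl) return job.resultUrl;
        if (job.state === "error") throw new Error(job.error ?? "render failed");
        await new Promise((r) => setTimeout(r, intervalMs));
      }
    }

Webhooks or server-sent events would avoid the polling overhead; this is just the simplest version of the pattern.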
I’d love feedback on:

1) prompt UX (what’s confusing / missing)
2) what “consistency controls” you wish existed
3) bugs / edge cases you hit
Happy to answer questions.