I’ve been working on a new project: ray3aivideo.com, powered by Ray3, which we describe as the world’s first reasoning video model.
Ray3 is also the first model (to my knowledge) that can generate studio-grade HDR video, and it introduces a new Draft Mode for rapid iteration—helpful if you’re testing prompts or exploring creative workflows. On top of that, we’ve been focusing on physics realism and scene consistency, two areas where many current models still struggle.
What it does
Generate short videos from text prompts, optionally combined with images.
Two modes:
Draft Mode → very fast, rough outputs for quick idea testing.
Full Mode → higher fidelity with HDR and more consistent motion.
Emphasis on realistic physics and temporal stability.
Runs directly in the browser, no queue or signup required.
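To make the two-mode workflow concrete, here is a minimal sketch of how a client might structure a generation request. The function, field names, and mode values below are my own illustrative assumptions, not the actual ray3aivideo.com API.

```python
# Hypothetical sketch of a Ray3-style generation request. The payload
# shape, field names, and mode strings are assumptions for illustration,
# not the real ray3aivideo.com API.

DRAFT = "draft"  # very fast, rough output for quick idea testing
FULL = "full"    # higher fidelity: HDR and more consistent motion

def build_generation_request(prompt, mode=DRAFT, image_url=None):
    """Assemble a payload for a text prompt, optionally with an image."""
    if mode not in (DRAFT, FULL):
        raise ValueError(f"unknown mode: {mode!r}")
    payload = {"prompt": prompt, "mode": mode}
    if image_url is not None:
        payload["image_url"] = image_url  # optional image conditioning
    return payload

# Typical iteration loop: explore variants in Draft Mode, then rerun
# the best prompt in Full Mode for the final HDR render.
draft = build_generation_request("a glass marble rolling off a table")
final = build_generation_request(
    "a glass marble rolling off a table", mode=FULL
)
```

The point of the two-call pattern is that drafts are cheap enough to run many times before committing to a full-quality render.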
Why I built it
Existing models are often slow, inconsistent, or closed off.
I wanted a setup where indie creators and developers could explore new workflows without waiting in line.
I’m curious to see how Ray3’s approach (reasoning + HDR + physics) performs compared to other systems.
Try it
https://ray3aivideo.com/?utm_source=info12138
Feedback I’d love
How does HDR output look on your devices?
Is Draft Mode useful for iteration, or should it behave differently?
Where does Ray3 still break or fail compared to other models?
This is still an early release, and I’d love to improve it based on real-world use cases.
Thanks for taking a look!