Try it: https://ai.outscal.com/ Sample video: https://outscal.com/v2/video/ai-constraints-m7p3_v1/12-01-26...
You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → Scene Design → React components → deployed video.
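Roughly, the flow looks like the sketch below. This is illustrative only: the stage names, types, and stubs are my shorthand for the pipeline described above, not the production code; each stub stands in for a separate agent call.

```ts
// Illustrative sketch of the staged pipeline -- names and types are assumptions.
type Style = "pencil-sketch" | "neon";

interface Job { style: Style; script: string }   // script capped at 2000 chars
interface SceneDirection { scenes: string[] }
interface Audio { narrationUrl: string }
interface Assets { svgs: string[] }
interface SceneDesign { layouts: string[] }
interface Components { tsxFiles: string[] }

// Stubs standing in for individual agent calls.
const directScenes = async (j: Job): Promise<SceneDirection> => ({ scenes: [j.script] });
const generateAudio = async (_d: SceneDirection): Promise<Audio> => ({ narrationUrl: "narration.mp3" });
const generateSvgAssets = async (_d: SceneDirection, _s: Style): Promise<Assets> => ({ svgs: [] });
const designScenes = async (_d: SceneDirection, _a: Assets): Promise<SceneDesign> => ({ layouts: [] });
const writeComponents = async (_d: SceneDesign, _a: Audio): Promise<Components> => ({ tsxFiles: [] });
const deploy = async (_c: Components): Promise<string> => "https://example.com/video";

// Each stage only sees the previous stage's output; there is no shared
// scratch space for an agent to wander around in.
async function renderVideo(job: Job): Promise<string> {
  const direction = await directScenes(job);
  const audio = await generateAudio(direction);
  const assets = await generateSvgAssets(direction, job.style);
  const design = await designScenes(direction, assets);
  const components = await writeComponents(design, audio);
  return deploy(components);
}
```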
What we learned building this:
We built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script — they had file tools and would wander off reading random files, exploring tangents, producing inconsistent output.
The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent to only what it needed and pre-fed context instead of letting agents fetch it themselves.
Quality improved immediately.
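To make the "fewer tools, pre-fed context" pattern concrete, here's a minimal sketch. The AgentSpec shape, runAgent helper, and tool names are assumptions for illustration, not the Claude Agent SDK's actual API.

```ts
// Minimal sketch of "fewer tools, pre-fed context". All names here are
// illustrative assumptions, not the real SDK surface.
interface AgentSpec {
  systemPrompt: string;
  allowedTools: string[]; // only the tools this one step needs
  context: string;        // everything the agent must know, supplied upfront
}

// Stub standing in for an actual model call. In a real system this would
// pass spec.allowedTools as the *entire* tool list, so there is nothing
// extra for the agent to wander into.
async function runAgent(spec: AgentSpec, task: string): Promise<string> {
  return `ran "${task}" with tools [${spec.allowedTools.join(", ")}]`;
}

// Example: the SVG-asset step gets a single write tool, and the scene
// direction is pasted into its context instead of handing it a file-read
// tool to go fetch things itself.
const svgAssetAgent: AgentSpec = {
  systemPrompt: "Produce one SVG per scene. Output files only, no exploration.",
  allowedTools: ["write_file"],
  context: JSON.stringify({ scenes: ["Intro", "Constraint demo", "Outro"] }),
};

runAgent(svgAssetAgent, "generate SVG assets").then(console.log);
```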
We didn't launch the web version until this was solid. We then moved to the Claude Agent SDK, kept the same constraints, and the pipeline now runs fully automated.
Happy to discuss the agent architecture, why React-as-video, or anything else.
neochief•3w ago
mayankkgrover•3w ago
Sorry for the late response.
We were making game dev courses and needed videos with readable code snippets. Since our output is React/SVG, text renders perfectly; it's actual text, not pixels. And if you spot a typo? Just edit the TSX; no re-prompting or re-rendering. The best fit is explainer videos with code, technical diagrams, or anything where clean text matters, though we've seen other use cases as well.
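For illustration, a scene in this style might look like the component below. It's a hypothetical sketch, not our actual generated output: the point is that the code snippet is real SVG text, so it stays sharp at any resolution and a typo fix is a one-line edit to the file.

```tsx
import React from "react";

// Hypothetical scene component (illustrative only). The snippet lines are
// SVG <text> elements: crisp at any zoom, and editable in place.
export function CodeSnippetScene() {
  const lines = [
    "function reward(score: number) {",
    "  return score > 0.5 ? 1 : 0;",
    "}",
  ];
  return (
    <svg viewBox="0 0 640 360" width={640} height={360}>
      <rect width={640} height={360} fill="#111" />
      {lines.map((line, i) => (
        <text
          key={i}
          x={40}
          y={80 + i * 28}
          fontFamily="monospace"
          fontSize={20}
          fill="#9f9"
        >
          {line}
        </text>
      ))}
    </svg>
  );
}
```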