I wanted to see how far vibe coding could actually go. Not autocomplete, not copilot — I gave Claude Opus 4.6 (running in OpenCode, an agentic CLI) full ownership of building a domain-specific language for generating 2D animated movies from code.
The constraint: I describe what I want, the agent builds it. I critique the output, the agent iterates. No human touches the code at any point. Not the parser, not the renderer, not the 3,300-line procedural character drawing module. Not even the demo scripts or SVG backgrounds.
It went through three major rewrites:
- V1: Rigid SVG characters sliding around. Puppet show.
- V2: Bone-based rig system. Still puppet-like.
- V3: Fully procedural: characters drawn frame-by-frame from Bézier curves, with body-angle perspective, staggered joint timing, and spring physics on hair and clothing.
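The spring physics in V3 is the kind of thing that's easy to sketch in isolation. A minimal version, assuming a damped spring pulling a hair or cloth point toward its parent joint (constants and names are illustrative, not taken from the repo):

```rust
// Illustrative damped spring for follow-through animation: a trailing point
// (hair tip, coat hem) chases its parent joint's target position, overshoots,
// and settles. Semi-implicit Euler integration keeps it stable at small dt.
#[derive(Debug, Clone, Copy)]
struct Spring {
    pos: f32,
    vel: f32,
    stiffness: f32, // how hard the point is pulled toward the target
    damping: f32,   // bleeds off velocity so the motion settles
}

impl Spring {
    fn step(&mut self, target: f32, dt: f32) {
        let accel = self.stiffness * (target - self.pos) - self.damping * self.vel;
        self.vel += accel * dt; // update velocity first (semi-implicit Euler)
        self.pos += self.vel * dt;
    }
}

fn main() {
    let mut s = Spring { pos: 0.0, vel: 0.0, stiffness: 120.0, damping: 12.0 };
    // Parent joint snaps to 1.0; the sprung point lags, overshoots, settles.
    for _ in 0..240 {
        s.step(1.0, 1.0 / 60.0); // four seconds at 60 fps
    }
    println!("settled near target: {:.3}", s.pos);
}
```

Run per joint chain at render time, this is enough to make rigid limbs read as having mass.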
The most interesting parts of the process:
1. The agent kept over-engineering infrastructure while the actual visual output stayed at stick-figure quality. It took direct, blunt feedback ("this looks like a puppet show, you keep building the same thing") to break the pattern.
2. Poses started as nine hardcoded keywords in the Rust source. After I pointed out the limitation, the agent added a full custom pose system to the DSL: 23 controllable joint and expression parameters, defined directly in the script file.
3. The agent added overlap detection after characters kept walking through each other. It now refuses to render if two characters would intersect at any frame.
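On point 2: I haven't listed the actual 23 parameters, but the shape of the feature is easy to picture. A hedged sketch of what a script-defined pose might compile to, with hypothetical field names and defaults:

```rust
// Hypothetical pose representation: a bag of joint angles and expression
// dials with sensible defaults, so a script pose only names what it changes.
// Field names and count are illustrative, not the DSL's actual 23 parameters.
#[derive(Debug, Clone, PartialEq)]
struct Pose {
    torso_lean: f32,  // degrees from vertical
    head_tilt: f32,   // degrees
    left_elbow: f32,  // degrees of bend
    right_elbow: f32, // degrees of bend
    mouth_open: f32,  // 0.0..=1.0 expression dial
    brow_raise: f32,  // 0.0..=1.0 expression dial
}

impl Default for Pose {
    fn default() -> Self {
        Pose {
            torso_lean: 0.0,
            head_tilt: 0.0,
            left_elbow: 15.0,
            right_elbow: 15.0,
            mouth_open: 0.0,
            brow_raise: 0.0,
        }
    }
}

fn main() {
    // A script pose overriding one joint becomes a struct update over defaults.
    let wave = Pose { right_elbow: 120.0, ..Pose::default() };
    println!("{wave:?}");
}
```

The struct-update pattern mirrors how a DSL pose block naturally behaves: unspecified parameters inherit the neutral stance.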
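The overlap check in point 3 reduces to a simple per-frame test. A minimal sketch, assuming each character is approximated by an axis-aligned bounding box sampled at every frame (the AABB simplification and all names are my assumptions, not the repo's implementation):

```rust
// Illustrative pre-render collision gate: sample every character's bounding
// box at every frame; if any pair ever intersects, report the frame and bail
// instead of rendering characters walking through each other.
#[derive(Debug, Clone, Copy)]
struct Aabb {
    min_x: f32,
    min_y: f32,
    max_x: f32,
    max_y: f32,
}

impl Aabb {
    fn intersects(&self, other: &Aabb) -> bool {
        // Boxes overlap iff they overlap on both axes.
        self.min_x < other.max_x
            && other.min_x < self.max_x
            && self.min_y < other.max_y
            && other.min_y < self.max_y
    }
}

/// Returns the first frame at which any two characters overlap, if any.
fn first_overlap(frames: &[Vec<Aabb>]) -> Option<usize> {
    for (f, boxes) in frames.iter().enumerate() {
        for i in 0..boxes.len() {
            for j in i + 1..boxes.len() {
                if boxes[i].intersects(&boxes[j]) {
                    return Some(f);
                }
            }
        }
    }
    None
}

fn main() {
    let frames = vec![
        // frame 0: two characters standing apart
        vec![
            Aabb { min_x: 0.0, min_y: 0.0, max_x: 1.0, max_y: 2.0 },
            Aabb { min_x: 3.0, min_y: 0.0, max_x: 4.0, max_y: 2.0 },
        ],
        // frame 1: the second character has walked into the first
        vec![
            Aabb { min_x: 0.0, min_y: 0.0, max_x: 1.0, max_y: 2.0 },
            Aabb { min_x: 0.5, min_y: 0.0, max_x: 1.5, max_y: 2.0 },
        ],
    ];
    println!("{:?}", first_overlap(&frames)); // prints Some(1)
}
```

Running this before rendering is what lets the tool refuse up front rather than produce a broken video.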
The output still looks like what it is — procedurally generated cartoon figures drawn with math. But 7,665 lines of Rust, a PEG grammar, procedural character rendering, camera systems, scene transitions, and a working demo — all from an AI agent that never saw a reference implementation.
Repo policy: only agent-generated code accepted. Bug reports welcome, but all implementation must come from agents.
Tech: Rust, tiny-skia for 2D rendering, pest for parsing, FFmpeg for video encoding.
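For anyone curious how those pieces connect: the usual pattern is to rasterize each frame with tiny-skia and stream the raw RGBA bytes into FFmpeg over stdin. A sketch of the FFmpeg side, with frame size, rate, and encoder flags as assumptions rather than the repo's actual values:

```rust
// Hypothetical sketch of the encode step: build the argument list for piping
// raw RGBA frames into FFmpeg via stdin. Resolution, fps, and codec choices
// here are assumptions, not the project's real settings.
fn ffmpeg_args(width: u32, height: u32, fps: u32, out: &str) -> Vec<String> {
    vec![
        "-f".into(), "rawvideo".into(),       // raw frames arrive on stdin
        "-pix_fmt".into(), "rgba".into(),     // tiny-skia pixmaps are RGBA8
        "-s".into(), format!("{width}x{height}"),
        "-r".into(), fps.to_string(),
        "-i".into(), "-".into(),              // "-" means read from stdin
        "-c:v".into(), "libx264".into(),
        "-pix_fmt".into(), "yuv420p".into(),  // widest player compatibility
        out.into(),
    ]
}

fn main() {
    let args = ffmpeg_args(1280, 720, 30, "demo.mp4");
    // One would then spawn FFmpeg with std::process::Command, piping stdin,
    // and write each rendered frame's bytes to the child process in order.
    println!("ffmpeg {}", args.join(" "));
}
```

Encoding from a pipe avoids ever touching thousands of intermediate PNG files on disk.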