I built a face for my Clawdbot. One <script> tag, zero dependencies, zero build step.
Demo: just open index.html. It auto-detects what's available and adapts:
- Open the file directly → animated face, tap to cycle 16 expressions
- Run the included Node server → push-to-talk appears (Whisper STT)
- Pass a WebSocket URL → chat input appears, face reacts to responses in real time
The face is a single self-contained JS file (face.js, ~17KB) that injects its own SVG, CSS, and animation loop. Natural blinking, pupil drift, breathing animation, ambient glow, and a subtitle system — all pure JS/SVG, no canvas, no WebGL, no React.
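The "natural" blinking comes from randomizing the interval between blinks instead of using a fixed timer. A minimal sketch of that idea (function and timing values are illustrative, not the actual face.js internals):

```javascript
// Pick a randomized delay so blinking never falls into a
// mechanical rhythm (2–6 s here; values are illustrative).
function nextBlinkDelay(rand = Math.random) {
  return 2000 + rand() * 4000; // milliseconds
}

// Blink, then schedule the next blink at a fresh random delay.
function scheduleBlinks(blink) {
  setTimeout(() => {
    blink(); // e.g. briefly collapse the SVG eyelid paths
    scheduleBlinks(blink);
  }, nextBlinkDelay());
}
```

Pupil drift and breathing work the same way conceptually: small randomized offsets applied on a timer, no canvas or WebGL needed.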
There's also a gateway integration module (clawdbot.js) that maps agent events to expressions automatically — thinking when processing, investigating when searching the web, working when running code, confused on errors.
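The event-to-expression mapping can be pictured as a simple lookup table with a fallback. The event names below are assumptions for illustration, not the actual gateway protocol:

```javascript
// Map gateway event types to face expressions; unknown events
// fall back to 'neutral'. Event names here are hypothetical.
const EXPRESSIONS = {
  processing: 'thinking',
  web_search: 'investigating',
  run_code:   'working',
  error:      'confused',
};

function expressionFor(eventType) {
  return EXPRESSIONS[eventType] || 'neutral';
}

// Wiring it to the face would look like:
//   face.set(expressionFor(evt.type), 5000);
```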
Built for Clawdbot (https://github.com/clawdbot/clawdbot) and Moltbot (https://github.com/moltbot/moltbot), but face.js works standalone with anything. Just call face.set('happy', 5000).
Zero npm install. Zero bundler. Copy the file, add the script tag, done.
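Standalone usage amounts to this (assuming face.js exposes a global `face` object, which the `face.set('happy', 5000)` call above suggests):

```html
<script src="face.js"></script>
<script>
  // Show 'happy' for 5 seconds, then revert to the default expression.
  face.set('happy', 5000);
</script>
```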