Unlike typical code-generation tools that rely on general-purpose LLMs, we have developed our own Multimodal LLM (MLLM) optimized for gaming. It doesn't just output text/code; it natively handles 3D models, spatial coordinates, and visual information to construct the game world.
Key Features:
Engine Support: We currently support real-time generation for Three.js and Unity WebGL.
Workflow: Think of it like "Claude Code", but for game creation. Instead of single-shot generation (which rarely works for complex games), it's an iterative process: you build the game step by step with the AI, refining mechanics, adjusting assets, and debugging in real time. A rough illustration of a first iteration follows below.
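To make that starting point concrete, here is a minimal sketch of the kind of Three.js scene a first iteration might produce. This is purely illustrative and assumes a plain Three.js setup; it is not actual output from our engine.

```ts
import * as THREE from 'three';

// Basic scene setup: a spinning cube, the sort of starting point an early iteration might generate.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A placeholder "player" object; later iterations would swap this for a generated 3D asset.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x44aa88 })
);
scene.add(cube);

// Simple directional light so the standard material is visible.
const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(2, 2, 5);
scene.add(light);

// Render loop: rotate the cube each frame.
renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```

Subsequent iterations would then layer on player controls, game logic, and generated assets through the same conversational loop.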
Current Status:
You can already build basic, demo-level games.
The Challenge: Our biggest focus is improving complex spatial understanding in our multimodal model. The AI sometimes struggles with intricate 3D spatial relationships, but we are shipping an update by the end of Q1 that should significantly improve this.
Try it out: We offer free credits upon registration so you can test the engine yourself.
I’d love to hear your feedback on the iteration workflow—does it feel intuitive to build a game this way?