There's just no way around the fact that games need a lot of play-testing and visualization, all of which is difficult to capture in discrete unit tests.
Many of the games I've released using an agentic CLI (OpenCode) would have been absolutely impossible to vibe code.
That being the case, any library that’s been around long enough to be well represented in the training data of the larger LLMs will work perfectly fine. Love2D, Phaser, etc., would all be solid options.
- make the game as functional as possible: the game state is stored in a serializable format, and new game states are generated by combining the current game state with events (like player input, clock ticks, etc.)
- the serialized game state is much more accessible to the AI because it is in the same language AI speaks: text. The AI can also simulate the game by sending synthetic events (player inputs, clock ticks, etc.)
- the functional, serialized game architecture is also great for unit testing: a text-based game state + text-based synthetic events results in another text-based game state. Exactly what you want for unit tests. (You don't even need any mocks or harnesses!)
- the final step is rendering this game state. The part AI has trouble with is saved for the very end. You probably want to verify the rendering and play-testing manually, but AI has been getting pretty decent at analyzing images (screenshots/renders).
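The state + events pattern described above can be sketched roughly like this (hypothetical types and names for illustration, not the code from any particular repo): a pure `update` function takes the current state plus an event and returns a new state, so the whole game can be driven and tested as text.

```typescript
// Minimal sketch of a functional, serializable game state: state in,
// event in, new state out. No I/O and no mutation, so the AI (or a
// unit test) can drive the game purely with JSON text.

type Player = "X" | "O";

interface GameState {
  board: (Player | null)[]; // 9 cells, row-major
  turn: Player;
}

type GameEvent = { kind: "move"; cell: number };

const initialState: GameState = {
  board: Array(9).fill(null),
  turn: "X",
};

// Pure reducer: combine the current state with an event to get the next state.
function update(state: GameState, event: GameEvent): GameState {
  if (event.kind === "move" && state.board[event.cell] === null) {
    const board = state.board.slice();
    board[event.cell] = state.turn;
    return { board, turn: state.turn === "X" ? "O" : "X" };
  }
  return state; // invalid events leave the state unchanged
}

// Text in, text out: exactly the shape unit tests want, no mocks needed.
const next = update(initialState, { kind: "move", cell: 4 });
console.log(JSON.stringify(next));
```

A test is then just a serialized state, a synthetic event, and an expected serialized state, which is also the format an LLM can read and reason about directly.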
Here is an example of a simple game developed with this functional architecture: https://github.com/Leftium/tictactoe/blob/main/src/index.ts
- Yes, it's very simple, but the same concepts apply to more complex games
- Right now it only renders to the terminal, but you could imagine other renderers for the browser and game engines
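To make the "swap in other renderers" point concrete, here is one way it could look (assumed shapes, not the linked repo's actual API): a renderer is just a pure function from state to output, so a terminal renderer and a browser renderer share the same signature.

```typescript
// Sketch of the rendering-last step: rendering is a pure function of the
// serialized state, so different targets are interchangeable.

interface GameState {
  board: ("X" | "O" | null)[]; // 9 cells, row-major
  turn: "X" | "O";
}

type Renderer = (state: GameState) => string;

// Terminal renderer: plain text grid, one row per line.
const renderTerminal: Renderer = (state) =>
  [0, 3, 6]
    .map((row) =>
      state.board
        .slice(row, row + 3)
        .map((cell) => cell ?? ".")
        .join(" ")
    )
    .join("\n");

// A browser renderer shares the signature but emits HTML instead.
const renderHtml: Renderer = (state) => `<pre>${renderTerminal(state)}</pre>`;

const state: GameState = {
  board: ["X", null, null, null, "O", null, null, null, null],
  turn: "X",
};
console.log(renderTerminal(state));
```

Because both renderers consume the same serialized state, the AI-generated game logic never needs to change when the rendering target does.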
The comment about games needing play-testing and visualization is spot on. What's worth adding is that this is actually a general principle — vibe coding works best when you have explicit architecture upfront constraining where and how generation happens.
This is one of the ideas formalized in the Agile Vibe Coding Manifesto (https://agilevibecoding.org) — specifically "Architecture guides and constrains generation." Whether you're building a game or a web app, if the AI is generating code without clear structural boundaries, you accumulate unmaintainable complexity fast. For game dev, that means thinking through your state representation, entity model, and rendering separation before you let the LLM loose.
For frameworks: anything with strong conventions and heavy LLM training data (Godot, Phaser, Love2D) will work better than opinionated engine-first workflows.
OsrsNeedsf2P•12h ago
I've been working on Ziva[0], an AI plugin for Godot that's explicitly for game development. Game development is hard and a different paradigm than regular coding, so there may be a learning curve, but we're working to flatten that out.
Happy to answer any questions!
[0] https://ziva.sh