I think the culture war point is also super true of the game design industry, not just the consumers, where the already ultra-competitive nature of the work means that the creatives and the industry as a whole have taken a very strong stance against genAI. That's just a reckon, and I don't know if it's good or bad.
It does feel a little counter to the march of progress, but in a medium where high effort can be enjoyed by many, I'm personally cool with artisanal handmade games.
The biggest barrier to success has always been having a good idea, and AI is just going to make that ever more apparent, because you'll be able to cook up knockoffs ever more rapidly.
Nah, it's just a very vocal minority making lots of noise. We're slowly starting to see saner minds prevail, with Steam backtracking on its ban of AI-generated content and so on. People don't care how it was made if the end result is good. And the "but muh art" stuff is just virtue signalling. I've used this example before: for me, "art" is the fact that I can have a horse as my ruler in EU, not the fact that the horse might be generated by a diffusion model. Art is always in the eye of the beholder, or something.
On a more general note, as with the question "where are all the good open source projects", things will come. We're still in the learning phase with this shiny new toy. It's only been about three years since the tech became readily available, and maybe two years since people could take offline models and run with them. Give it time.
In the meantime, I see a lot of help from agentic coding in game dev. I've been following a space roguelite thingy on itch; the developer has been pumping out updates every other day, and the game was pretty much ready in about a month. Sure, some updates broke stuff, and some things didn't make much sense, but the speed with which the dev could take feedback and push out an update is remarkable. Small projects are doing it today. Larger projects will probably do it tomorrow.
My guess is that there's a big chance the next "big thing" in gameplay that somehow uses AI will come from a smaller team or a single dev, not the big labs. Something genre-defining, like Minecraft or Tarkov.
Signalling means you are doing the thing only to identify yourself to others as being in the tribe. That's kind of the opposite of my personal take.
Video games have discrete, static goals that let a player focus on an objective. Compared to LLMs, it’s a passive experience.
People play games for all sorts of reasons (to relax, competition, to build something of their own, solve challenges).
I think this is a fundamentally different experience than what an LLM can offer.
That's not to say LLMs can't become a fun experience, but it's going to take decades to develop a way to produce that experience. Look at how long it took Dungeons & Dragons, or any video game genre, to get to the level of polish it's at today.
The real technical blockers are performance and voice synthesis. If voice synthesis were at parity with human actors, it would be worth it for major studios to battle the performance problem. In text-based games especially, the time it takes to generate enough text is just too slow to be convenient.
I mean, that's not a new game concept, but I definitely think it levels up the experience.
The enemy locomotion in Arc Raiders was almost entirely created with reinforcement learning. It’s a very impressive modern example (the game has been out for less than 6 months).
Here’s the documentary explaining how the RL locomotion works. Skip to 10 min 36 sec. https://www.youtube.com/watch?v=DRlhpzc7ImA&t=636s
Having a body is fun. I think that's one reason why VR has such quick hype/death cycles: it doesn't do a good enough job of fooling your body. Conventional games induce more of a dissociative or hypnotic state where you temporarily forget your body. They can range from very, VERY abstract (like Pong, Pac-Man, or BABA IS YOU) to attempts to simulate the real world as convincingly as possible through high-end graphics and physics engines.
One of the things that made Untitled Goose Game so much fun for me was that playing it made me _feel like a goose_. It made me want to run around doing goose things for goose reasons. You can spread your wings and honk regardless of whether it advances the game. A similar game, Little Kitty, Big City, offers the promise of the same idea but with a cat instead of a goose. I tried it but never felt like a cat playing it; instead it felt like being a person controlling a cat. These are such subtle shades of gameplay and storytelling that I have a hard time imagining LLMs being useful in the design.
https://components.news/the-gamer-and-the-nihilist/
That is, people who are caught in AI FOMO are performatively trying to appear productive, and that's the opposite of fun.
Speculative generation is expensive and time-consuming, and in most cases just means the game company writing a check to a provider every time the player does something. It's very difficult to imagine a revenue model where that makes sense. Even if you did get it to make sense, you then have to worry about there being a market risk of people associating your game with AI. I don't think you need more than those two explanations to understand why, two or three years in, we have not seen the games the author describes.
Is it possible that talking to a chatbot just isn't fun from a game design perspective? Yes, it might even be very likely. But we're not gonna have a real answer to that question, tested by real game developers, until it becomes pragmatically possible to ship a game using these tools.
The cost of generation will come down, people will find clever uses for it, and one way or another opinions about AI will change. Then maybe we'll see whether or not these games are any fun.
Lots of people "roleplay" with chatbots every day and it must be fun for them or they wouldn't do it.
The problem is mostly "how do you lock an LLM into the narrative context of a more structured game"?
Having an LLM roleplay as a _specific character_ in a _specific setting_ for a long period of time is a hard problem. Even maintaining consistency writing prose for more than maybe a chapter or two's worth of text is tricky.
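One blunt way to approach that lock-in problem is to treat the model's output as untrusted and have the game validate it against explicit state before it reaches the player. Here's a minimal sketch of the idea; the character sheet, banned-word list, and `generate` callback are all invented for illustration, not from any real game:

```python
# Toy guardrail: the game, not the model, owns the final word on what
# reaches the player. The character sheet below is a made-up example.

CHARACTER = {
    "name": "Garrick",
    "knows": {"the old mill", "the river road", "the missing caravan"},
    "banned": {"internet", "laser", "spaceship"},  # anachronisms for the setting
}

def validate_reply(reply: str) -> bool:
    """Return True if the reply stays inside the character's world."""
    lowered = reply.lower()
    return not any(word in lowered for word in CHARACTER["banned"])

def constrained_reply(generate, prompt: str, retries: int = 3) -> str:
    """Ask the model, retrying when it breaks character, with a safe fallback."""
    for _ in range(retries):
        reply = generate(prompt)
        if validate_reply(reply):
            return reply
    return f"{CHARACTER['name']} shrugs and says nothing."  # game-authored fallback
```

In practice `generate` would wrap an actual model call; the point is just that a structured game can reject or re-roll replies instead of trusting the model to stay in character on its own.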
I don't even really think the cost problem is relevant. If a game had a kickass gameplay loop that required you to put in your own OpenAI API key, people would 100% do that. Maybe that wouldn't work for a AAA game, but not even an indie game has tried it or figured it out.
Claude can write, design, and play games. I know this because I hooked up a MUD to an MCP server: it built a whole world, other agents joined, and they talked to each other, solved problems together, and built out their own little sections of the MUD.
It is actually fun! I have it online if people want to play: just sign up, wait to get approved, add the MCP to Claude Code, and tell it to play:
Every bit of actual game content was written entirely by AI agents, with very little input from me.
It was actually an incredible experience … except for the quick “game” truncation.
It'd be interesting to see how Claw can improve on that. Give it some serious design time first.
Also thought about how a smart expansion of Zork would play.
Basically, you create characters with bios and traits, then a setting/context. As you write your story, you can have multiple characters involved, and they'll act from their own perspectives and traits.
These also have lorebooks with triggers: if you mention The Barking Dog Inn, the AI and the characters will know what you mean, and characters with an outgoing personality will be more eager to go there than others, etc.
Finally, these systems usually have a long-term memory where key events are saved so the AI remembers them.
So a lot of this already exists!
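For anyone curious, the lorebook mechanic described above is simple to sketch: scan the player's message for trigger phrases and inject the matching lore into the model's hidden context. The entries and names below are made up for illustration; real tools like SillyTavern's World Info work on roughly this principle:

```python
# Minimal lorebook: trigger phrases map to lore that gets injected into
# the prompt context whenever the player mentions them.

LOREBOOK = {
    "the barking dog inn": "A rowdy tavern in the harbor district, run by Mags.",
    "the silver mine": "Abandoned since the collapse; locals say it's haunted.",
}

def active_lore(player_message: str) -> list[str]:
    """Return lore entries whose trigger phrase appears in the message."""
    lowered = player_message.lower()
    return [lore for trigger, lore in LOREBOOK.items() if trigger in lowered]

def build_context(player_message: str, character_bio: str) -> str:
    """Assemble the hidden context the model sees before replying."""
    parts = [character_bio] + active_lore(player_message)
    return "\n".join(parts)
```

The nice property is that the lore budget scales with what's actually mentioned, so the context stays small even with a huge world bible.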
In earlier text adventures (e.g., Infocom games), some portion of those constraints came from the authors failing to anticipate legitimate ways that users would try to phrase things. But that's not nearly such a problem in anything made since the late '90s, especially if you stick to XYZZY Award winners.
The more essential reason for that constraint is that it's just good storytelling. The author of a work of IF has an idea they want to explore. That main idea could be narrative (Photopia or Anchorhead), or it could be a gameplay mechanic (Savoir-Faire or Counterfeit Monkey). But in any case, if your goal is to appreciate the creator's vision, those constraints are critical because they telegraph to you, the player, what you should and should not be exploring.
This isn't an idea that's specific to text adventures, either. The creators of Outer Wilds deliberately made areas flat and boring when there wasn't anything there for the player to do to advance the story, specifically because they didn't want players wasting time on exploration that would ultimately prove pointless. This is also why open-world games that do go for a more uniformly detailed world need to hand-hold the player and tell them where to go every step of the way. Without that, players would tend to get lost, lose their sense of progress, and ultimately end up bored.
I think that, because of this dynamic, using AI to flesh out the unimportant bits of a game would be a cardinal game design sin. Making bloat cheap and easy does not make it good. It just makes more of it.
As for generated content in regular games, I don't see an issue if the content is high quality and free of errors. People don't like low-quality content regardless of who generated it, human or LLM. It's just that there's currently more bad content coming from LLMs, that's all.
I have to imagine someone is looking into that: sandbox-style games with hundreds of characters who have unique personalities, respond to any input, and remember all your interactions... that would be amazing.
No one wants to move first with such a polarizing, unproven and rapidly changing tech.
Also, having recently played a run of Civ 5 (before burning out, because the late game is tedious and the overall game so unrealistic), I thought that good AI would help 4X games remain epic while letting the user choose their level of tedium. It would be nice, for instance, to manually go through the early game, when management is easy and exploration is fun, only to turn things over to an AI at some point (and you generally do, with autoexploration). Same thing with other eras, though. By the time I'm in the late game and have a large empire, I'd rather focus on diplomacy and moving armies, not city management or worker management. I don't want to fully give up on those aspects of the game, though. It would be nice to, say, mostly automate war but be able to jump in if I want to micromanage for a few turns.
I've said this before: in Mass Effect, characters are interesting because they are unique, principled, not always right, and are tested. Garrus, for example, is a former law enforcement official who got upsetty spaghetti about how sometimes the law fails to catch a criminal, so he decides to start murdering them instead. In the course of several missions, you as the player personally experience this dynamic: hearing his story, seeing how the failure of real justice impacts people, talking with Garrus about his beliefs and how they changed. You can critique his position and point out your ideals, and then, after an hour of directly chewing on and mulling over the finer points of this textbook moral quandary, you are put in a position to make a snap decision about whether to kill a bad guy or let the courts handle it, with your own motivations and incentives in play.
AI doesn't do that. There's no internal consistency, and certainly not consistency with the surrounding game. It won't build a character to explore a point, and also build levels, gameplay, and even background events to tie in with it. Do you remember any characters from playing AI Dungeon a few years ago? People remember Garrus for decades.
Go read an AI-generated book; they aren't very good. They aren't trying to say anything. They don't have a unique perspective or set of ideals they want to explore. They don't have a purpose in their content creation.
If you burn immense effort tuning a procedural generation system, it will occasionally spit out content that seems to have a story, but 95% of it will just be unconnected boring mush. Game paste.
If you want to know what an LLM-produced story looks like, go play a recent Call of Duty campaign. It's just a list of 5-second set pieces, and the entire point of the game is to be dragged through set piece after set piece with barely the fragrance of a narrative in there. It's utterly boring and desensitizing.
It does not make a whole lot of sense to let an AI determine story, whether through dialogue, changing the game state, or whatever, for a few reasons. One: the fundamental aspect of a game is that there are rules and boundaries, and these rules need to remain consistent (and testable). It is entirely jarring if an AI NPC says something inconsistent with the game state, or changes the game state in a way that violates the understood rules and boundaries of the game. That isn't fun for the player at all, even if it sounds cool. If you don't believe me, try playing D&D with these things and see how annoying it becomes. And if you decide to tightly bind what the AI can and can't do with branching and rules, you are basically engineering the same way games have been engineered already, so why even use AI? It's a solution looking for a problem.
The second big issue is determinism: LLMs are fundamentally non-deterministic. In most games, you would expect an action to have the same, or at least a very similar, reaction most of the time. RNG already comes into play naturally in a lot of games in a predictable way (dice rolls, chance to hit, etc.). LLMs bring nothing new to the table here.
For world gen, we already have games like No Man's Sky that deterministically generate quadrillions of worlds. What help is an LLM here? We already have the technology.
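To spell out why seed-based generation doesn't need an LLM: everything is derived from the seed and coordinates, so the same world ID regenerates the same world on any machine, with nothing stored and nothing non-deterministic. A toy version of the idea (the terrain table is invented; real engines use layered noise functions, not a raw hash):

```python
import hashlib

def terrain_at(world_seed: int, x: int, y: int) -> str:
    """Deterministically pick terrain for a tile from (seed, coordinates).

    Same inputs always give the same tile: no state, no RNG drift. This is
    how games can address huge numbers of worlds without storing any of them.
    """
    digest = hashlib.sha256(f"{world_seed}:{x}:{y}".encode()).digest()
    terrains = ["ocean", "plains", "forest", "mountain"]
    return terrains[digest[0] % len(terrains)]
```

Compare that guarantee with an LLM at nonzero temperature, where the "same" world prompt can come back different every time.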
One area that would be interesting is agentic bot players, but that leads down the same path as the arguments above: there are already extremely sophisticated bots that play a huge variety of games. What do LLMs bring here?
https://www.complexsystemspodcast.com/episodes/narrative-mas...
First, the intersection between people who understand and care about genAI (and keep up with the SOTA) and people who want to put in the work and compromise to make great games is slim.
Think about the intersection between a procedural generation/shader programmer and someone focused on making a really fun and compelling game, tech be damned. The skills don't really overlap. One requires deep knowledge, technical mastery, geometric thinking, algorithmic optimization, and time in a dark room; the other requires empathy for humans, brutal iteration, artistic taste, and a knack for storytelling (even if just to sell the player on your game). You basically need both for your "AI game" to even qualify as an AI game. And in an industry that's so anti-AI, it's hard to find teams where these two personalities meet and vibe enough to work on a game together.
Second is model performance: quality, speed, cost. I don't think the industry has yet crossed the threshold where generation is high enough quality, fast enough for players, and cheap enough to amortize compared to traditional gamedev. You see tons of AI used in development, but deploying this stuff to the front lines doesn't really napkin-math out yet. That will change, though.
Third, game engines are NOT designed for genAI. They represent decades of optimization for a completely different pipeline, both technical and human: pitch -> publisher -> game engine -> art team + coders -> trailer -> marketing -> release -> support. Adding even very basic genAI requirements to this (say, dynamic texture generation loaded over the network, or streaming character dialogue) breaks decades of assumptions about how games are built and shipped (e.g. everything mega-compressed and loaded into statically allocated GPU buffers). So doing anything new means throwing away large parts of the game engine. And if you pitch not using 70% of the game engine, or building your own, to the 20-year game veteran/studio boss calling the shots in this industry: well, your game will not be getting funded.
This will all get fixed, though. The technology just came faster than the industry could adapt.
Looking forward to GDC this year!
azath92•1h ago
This refers to the nemesis system in Middle-earth: Shadow of Mordor and Shadow of War. It's an amazing set of interlocking procedural systems that genuinely feel like AI, but it's AI in the sense the term has always been used in games (the rules the game follows to govern NPCs and the world), not AI in the sense of modern LLMs or other generative systems. This video is a great look at what it is and why it's great, IMO: https://www.youtube.com/watch?v=Lm_AzK27mZY
I think a system like this could really work well with some modern LLM stuff, but it certainly feels magic without it.
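To make the "procedural, not generative" distinction concrete, a nemesis-style system is just plain state plus hand-written rules: NPCs remember encounters, get promoted when they survive, and assemble dialogue from that history. The names and rules below are a made-up toy, nothing like the actual (patented) implementation:

```python
# Toy nemesis-style NPC: memory and rank driven entirely by explicit rules,
# with zero machine learning involved. Details are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Orc:
    name: str
    rank: int = 1
    grudges: list[str] = field(default_factory=list)

    def survives_player(self, event: str) -> None:
        """Surviving an encounter promotes the orc and records a grudge."""
        self.rank += 1
        self.grudges.append(event)

    def taunt(self) -> str:
        """Dialogue assembled from remembered history: templates, not generation."""
        if self.grudges:
            return f"{self.name} (rank {self.rank}): 'I remember {self.grudges[-1]}!'"
        return f"{self.name} (rank {self.rank}): 'A new challenger!'"
```

Everything the NPC "knows" is inspectable game state, which is exactly what makes the system feel consistent in a way free-running generation struggles to match.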
coldpie•1h ago
Too bad, it's patented! https://www.eurogamer.net/shadow-of-mordors-brilliant-nemesi...
Fuck software patents and every single person who has ever filed one.
mannanj•45m ago
coldpie•22m ago
The patent system benefits the uber-wealthy at the expense of everyone else, so no, that won't be happening.