I think you’ll find most of the small teams making popular indie video games aren’t going to be interested in winning a pro-AI award.
Do they count procedural level generation as generative AI? Am I crazy that this doesn't seem clear to me?
> Games developed using generative AI are strictly ineligible for nomination.
I haven't found anything more detailed than that; I'm not sure if anything more detailed actually exists, or needs to.
And, second, what counts as generative AI? A lot of people wouldn't include procedural techniques in that definition, but AFAIK there's no consensus on whether traditional procedural generation should be described as "generative AI".
And a third thing is, if I use an IDE that has generative AI, even for something as simple as code completion, does that run afoul of the rule? So, if I used Visual Studio with its default IntelliCode settings, that's not allowed because it has a generative AI-based autocomplete?
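For what it's worth, traditional procedural generation involves no learned model at all, just seeded randomness and hand-written rules, which is why many people keep it separate from "generative AI". A minimal sketch (a hypothetical drunkard's-walk level carver, not taken from any particular game):

```python
import random

def carve_dungeon(width, height, steps, seed=0):
    """Classic drunkard's-walk procedural generation: no trained model,
    just a seeded PRNG plus simple carving rules."""
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2  # start in the middle
    for _ in range(steps):
        grid[y][x] = "."  # carve a floor tile
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # clamp the walker so a wall border always remains
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

for row in carve_dungeon(20, 8, 60):
    print(row)
```

Same seed, same level, every time; there's nothing "trained" anywhere in it.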
A bunch of 'if' is an "expert system", but I'm old enough to remember when that was groundbreaking AI.
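As a toy illustration of how far the goalposts have moved, here's roughly what such a rule-based "expert system" amounts to (the rules here are hypothetical, purely for illustration):

```python
def diagnose(symptoms):
    """A toy rule-based 'expert system': a chain of ifs encoding
    hand-written domain knowledge, checked in priority order."""
    if "won't power on" in symptoms:
        return "check power supply"
    if "overheats" in symptoms and "loud fan" in symptoms:
        return "clean heatsink"
    if "overheats" in symptoms:
        return "check thermal paste"
    return "no rule matched"

print(diagnose({"overheats", "loud fan"}))  # prints "clean heatsink"
```

In the 1980s this style of system, scaled up to thousands of rules, was sold as cutting-edge AI.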
I wonder whether, if the game directors had made their case beforehand, they would perhaps have been allowed to keep the award.
That said, the AI restriction itself is hilarious. Almost all games currently being made would have programmers using Copilot; would they all be disqualified for it? Where does this arbitrary line get drawn?
AI OK: Code
AI Bad: Art, Music.
It's a double standard because people don't think of code as creative. They still think of us as monkeys banging on keyboards.
Fuck 'em. We can replace artists.
It's more like the code is the scaffolding and support, the art and experience is the core product. When you're watching a play you don't generally give a thought to the technical expertise that went into building the stage and the hall and its logistics, you are only there to appreciate the performance itself - even if said performance would have been impossible to deliver without the aforementioned factors.
Games always bear their engine's touch, and for indie games the engine is often a big part of the process. See for example Clair Obscur here, which clearly has UE5's characteristic hair. The engine defines what the game can and cannot do and shapes the experience.
Then the gameplay itself depends a lot on how the code was written, and iterations on the code also shape the gameplay.
Also pretty sure some programmers like Jonathan Blow avoid AI generated code like the plague.
Which LLM told you that?
> Almost all games currently being made would have programmers using VSCode.
Which clearly isn't the case, unless they like to suffer through its Unreal and Unity integrations.
I think that is almost certainly untrue, especially among indie games developers, who are often the most stringent critics of gen ai.
Of course you could always find opinion pieces, blogs and nerdy forum comments that disliked AI; but it appears to me that hatred of AI-generated content is now hitting mainstream, normie contexts. Feels like my grandma may soon have an opinion on this.
No idea what the implications are or even if this is actually something that's happening, but I think it's fascinating
This is not a winning PR move when most normal people are already pretty pro-artist and anti tech bro
For instance, see Luddites: https://en.wikipedia.org/wiki/Luddite
By all means, I use it. In some instances it is useful. I think it is mostly a technology that causes damage to humanity, though. I just don't really care about it.
> But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”[1]
[1] https://www.smithsonianmag.com/history/what-the-luddites-rea...
https://english.elpais.com/culture/2025-07-19/the-low-cost-c...
> Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days to be replaced with the correct textures that had always been intended for release, but were missed during the Quality Assurance process.
When someone goes three miles per hour over the speed limit they are literally breaking the law, but that doesn’t mean they should get a serious fine for it. Sometimes shit happens.
Anyway, I don't agree with banning generative AI, but if the award show wants to do so, go ahead. What has caused me to lose any respect for them is that they're doing this for such a silly reason. There's a huge difference between using AI to generate your actual textures and ship those, and... accidentally shipping placeholder textures.
It really illustrates that this is more ideological than anything.
In that view, it doesn't matter whether you use it for placeholder or final assets. You paying your ChatGPT membership makes you complicit with the exploitation of that human creative output, and use of resources.
Few care about the mainstream game review sites or oddball game award shows, as their track record is terrible (see the Concord reviews).
Most go by player reviews, word of mouth, and social media.
Next year a lot of families will struggle to buy a needed computer for their kids' school because a few multibillion-dollar tech companies went all-in on AI.
Awards that focus on quality are too desired not to be a thing.
I expect generative AI to become a competitive advantage taken up by the vast majority.
To me, art is a form of expression from one human being to another. An indie game with interesting gameplay but AI generated assets still has value as a form of expression from the programmer. Maybe if it's successful, the programmer can afford to pay an artist to help create their next game. If we want to encourage human made art, I think we should focus on rewarding the big game studios who do this and not being so strict on the 2 or 3 person teams who might not exist without the help of AI.
(I say this knowing Clair Obscur was made by a large well respected team so if they used AI assets I think it's fair their award was stripped. I just wish The Game Awards would also consider using such a standard.)
Just two days ago there were reports that Naughty Dog, a studio that allegedly was trying to do away with crunch, was requiring employees to work "a minimum of eight extra hours a week" to complete an internal demo.
https://bsky.app/profile/jasonschreier.bsky.social/post/3mab...
These awards are behind the times and risk irrelevance.
What software in 2025 is written without AI help?
Every game released recently would have AI help.
For indie games in particular, that is very much not true. In fact, Steam has a 'made with AI' label, so it's not even true on that platform.
Before we know it we will have entrusted a lot to AI, and that can be either a good or a bad thing. The acceleration of development will be amazing. We could be well on our way to expanding into the universe.
Those guys worked in AAA studios and they got a $10 million budget.
How "indie" is that?
hambes•3h ago
The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
The quality suffers in both cases, and I would personally criticise generative AI in source code as well, but the ethical argument is only against profiting from artists' work without their consent.
ahartmetz•2h ago
As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies and make developers of closed source competition more effective. The legal situation is not clear yet.
protimewaster•2h ago
1. There is tons of public-domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content any more than a code-generating one does.
2. I have no doubt that there exist both source code AIs that have been trained on code whose licenses disallow such use, and art AIs that have been trained only on art that allows such use. So it feels flawed to just assume that AI code generation is in the clear and AI art is in the wrong.
NitpickLawyer•2h ago
The double standard here is too much. Notice how one is stealing while the other is learning from? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The art generated is not a 1-1 copy in any way.
blackbrokkoli•2h ago
Code is an abstract way of soldering cables in the correct way so the machine does a thing.
Art eludes definition while asking questions about what it means to be human.
danielbln•2h ago
Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.
saubeidl•2h ago
I consider some code I write art.
Jensson•2h ago
The game is art according to that definition while the individual assets in it are not.
perching_aix•1h ago
Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.
Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.
> while asking questions about what it means to be human
I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say
> the machine does a thing
to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.
This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?
surgical_fire•31m ago
If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?
And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?
Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.
magicalhippo•1h ago
Ok but that's just a training issue then. Have model A be trained on human input. Have model A generate synthetic training data for model B. Ensure the prompts used to train B are not part of A's training data. Voila, model B has learned to produce rather than copy.
Many state of the art LLMs are trained in such a two-step way since they are very sensitive to low-quality training data.
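Roughly, that two-step idea looks like this (toy stand-ins for the models, nothing resembling a real training stack):

```python
def model_a(prompt):
    # Stand-in for model A, trained on human-authored data
    # (a trivial transform here, purely for illustration).
    return prompt.upper() + "!"

def make_synthetic_dataset(prompts):
    # Step 1: model A labels prompts that were NOT part of
    # its own training data, producing synthetic pairs.
    return [(p, model_a(p)) for p in prompts]

def train_model_b(dataset):
    # Step 2: model B is fit purely on A's synthetic
    # (prompt, output) pairs, so it never sees the original
    # human-authored corpus directly.
    memory = dict(dataset)
    def model_b(prompt):
        return memory.get(prompt, "")
    return model_b

model_b = train_model_b(make_synthetic_dataset(["hello", "world"]))
print(model_b("hello"))  # prints "HELLO!"
```

Whether that laundering step changes the ethics is, of course, exactly the question being argued here.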
wiseowise•2h ago
According to your omnivision?
eucyclos•2h ago
The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the 'if everyone did it' argument for making that distinction. But even if we take for granted that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race for machines better impersonating humans, and that also ends in a terrible place if everyone does it.
If there's an aspect I haven't considered here I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject and would really welcome seeing some appeals to logic or ethics, even if they disagree with my position.
david_shaw•2h ago
I'm not sure that LLMs respect that restriction (since they generally don't attribute their code).
I'm not even really sure if that clause would apply to LLM generated code, though I'd imagine that it should.
pona-a•1h ago
I always believed GPL allowed LLM training, but only if the counterparty fulfills its conditions: attribution (even if not for every output, at least as part of the training set) and virality (the resulting weights and inference/training code should be released freely under GPL, or maybe even the outputs). I have not seen any AI company take any steps to fulfill these conditions to legally use my work.
The profiteering alone would be a sufficient harm, but it's the replacement rhetoric that adds insult to injury.
stinkbeetle•1h ago
This reasoning is invalid. If AI is doing nothing but simply "learning from" like a human, then there is no "stealing from artists" either. A person is allowed to learn from copyright content and create works that draw from that learning. So if the AI is also just learning from things, then it is not stealing from artists.
On the other hand if you claim that it is not just learning but creating derivative works based on the art (thereby "stealing" from them), then you can't say that it is not creating derivative works of the code it ingests either. And many open source licenses do not allow distribution of derivative works without condition.
SirHumphrey•11m ago
But code is complicated, and hallucinations lead to bugs and security vulnerabilities so it's prudent to have programmers check it before submitting to production. An image is an image. It may not be as nice as a human drawn one, but for most cases it doesn't matter anyway.
The AI "stole" or "learned" in both cases. It's just that one side is feeling a lot more financial hardship as the result.