I wonder if, had the game directors actually made their case beforehand, they would have been allowed to keep the award.
That said, the AI restriction itself is hilarious. Almost every game currently in production has programmers using Copilot; would they all be disqualified for it? Where is this arbitrary line drawn?
Of course you could always find opinion pieces, blogs, and nerdy forum comments that dislike AI; but it seems to me that hatred of AI-generated content is now reaching mainstream, normie contexts. It feels like my grandma may soon have an opinion on this.
No idea what the implications are, or even whether this is actually happening, but I think it's fascinating.
For instance, see Luddites: https://en.wikipedia.org/wiki/Luddite
https://english.elpais.com/culture/2025-07-19/the-low-cost-c...
> Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days to be replaced with the correct textures that had always been intended for release, but were missed during the Quality Assurance process.
Few care about mainstream game review sites or oddball game award shows, as their track record is terrible (see the Concord reviews).
Most go by player reviews, word of mouth, and social media.
hambes•35m ago
The use of generative AI for art is rightfully criticised because it steals from artists. Generative AI for source code learns from developers, who mostly publish their code under licenses that allow this.
The quality suffers in both cases, and I would personally criticise generative AI for source code as well, but the ethical argument is only against profiting from artists' work without their consent.
ahartmetz•29m ago
As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies or to make developers of closed-source competitors more effective. The legal situation is not clear yet.
protimewaster•24m ago
1. There is tons of public-domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content any more than a code-generating one does.
2. I have no doubt that there exist both source code AIs trained on code whose licenses disallow such use and art AIs trained only on art that allows it. So it feels flawed to simply assume that AI code generation is in the clear and AI art is in the wrong.
NitpickLawyer•17m ago
The double standard here is too much. Notice how one is "stealing" while the other is "learning from"? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The generated art is not a 1:1 copy in any way.
blackbrokkoli•13m ago
Code is an abstract way of soldering cables correctly so the machine does a thing.
Art eludes definition while asking questions about what it means to be human.
wiseowise•11m ago
According to your omnivision?
eucyclos•8m ago
The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the "if everyone did it" case for making that distinction. But even if we grant that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race toward machines that better impersonate humans, which also ends in a terrible place if everyone does it.
If there's an aspect I haven't considered here, I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject, and would genuinely appreciate some appeals to logic or ethics, even ones that disagree with my position.
altairprime•35m ago