I think you’ll find most of the small teams making popular indie video games aren’t going to be interested in winning a pro-AI award.
Are you sure? Maybe not in gaming, but I'm sure plenty of large companies create awards just so they can win them and mention them in their marketing.
I wouldn't be surprised if the likes of EA and Ubisoft create a "best use of AI in gaming" award for next year.
> Games developed using generative AI are strictly ineligible for nomination.
I haven't found anything more detailed than that; I'm not sure if anything more detailed actually exists, or needs to.
And, second, what counts as generative AI? A lot of people wouldn't include procedural generative techniques in that definition, but, AFAIK, there's no consensus on whether traditional procedural approaches should be described as "generative AI".
And a third thing is, if I use an IDE that has generative AI, even for something as simple as code completion, does that run afoul of the rule? So, if I used Visual Studio with its default IntelliCode settings, that's not allowed because it has a generative AI-based autocomplete?
Sure there is. "Generative AI" is just a marketing label applied to LLMs - intended specifically to muddy these particular waters, I might add.
No one is legitimately confused about the difference between hand-built procedural generation techniques, and LLMs.
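To make the distinction concrete: hand-built procedural generation is deterministic math plus a seeded RNG, with no model and no training data anywhere. A toy sketch of one classic technique (1D midpoint-displacement terrain), just as an illustration of what "procedural" usually means:

```python
import random

def midpoint_displacement(left, right, depth, roughness, rng):
    """Classic procedural terrain: recursively displace segment midpoints.
    Fully deterministic given the seed -- no model, no training data."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + rng.uniform(-roughness, roughness)
    lo = midpoint_displacement(left, mid, depth - 1, roughness / 2, rng)
    hi = midpoint_displacement(mid, right, depth - 1, roughness / 2, rng)
    return lo + hi[1:]  # drop the duplicated midpoint

rng = random.Random(42)  # same seed -> exact same terrain, every time
heights = midpoint_displacement(0.0, 0.0, depth=4, roughness=10.0, rng=rng)
print(len(heights))  # 2**4 + 1 = 17 points
```

An LLM, by contrast, is a model whose output is a function of weights learned from scraped training data, which is exactly where the ethical argument lives.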
So I think "Gen AI" is an umbrella term. The question is, do older techniques like GANs fall under it? A GAN is technically a generative technique that can upscale images (it's generating those extra pixels), but I don't know if it counts.
A bunch of 'if' statements is an "expert system", but I'm old enough to remember when that was groundbreaking AI.
I wonder whether, if the game's directors had actually made their case beforehand, they would perhaps have been allowed to keep the award.
That said, the AI restriction itself is hilarious. Almost all games currently being made would have programmers using Copilot; would they all be disqualified for it? Where does this arbitrary line get drawn?
AI OK: Code
AI Bad: Art, Music.
It's a double standard because people don't think of code as creative. They still think of us as monkeys banging on keyboards.
Fuck 'em. We can replace artists.
It's more like the code is the scaffolding and support, the art and experience is the core product. When you're watching a play you don't generally give a thought to the technical expertise that went into building the stage and the hall and its logistics, you are only there to appreciate the performance itself - even if said performance would have been impossible to deliver without the aforementioned factors.
Games always bear the touch of their engine, and for indie games that's often a big part of the process. See for example Clair Obscur here, which clearly has the UE5 character hair. The engine defines what the game can and cannot do, and that shapes the experience.
Then the gameplay itself depends a lot on how the code was written, and iterations on the code also shape the gameplay.
- Final Fantasy 7 Rebirth clearly had two completely decoupled teams working on the main game and the open world design respectively
- Cyberpunk 2077 is filled with small shoeboxes of interactable content
Which LLM told you that?
> Almost all games currently being made would have programmers using VSCode.
Which clearly isn't the case, unless they like to suffer in regards to the Unreal and Unity integrations.
I think that is almost certainly untrue, especially among indie games developers, who are often the most stringent critics of gen ai.
Of course you could always find opinion pieces, blogs and nerdy forum comments that disliked AI; but it appears to me that hate for AI gen content is now hitting mainstream contexts, normie contexts. Feels like my grandma may soon have an opinion on this.
No idea what the implications are or even if this is actually something that's happening, but I think it's fascinating
We’ve observed this in AI gen ads (or “creatives” as ad people call them)
They work really well, EXCEPT if there is a comment option next to the ad - if people see others calling the art “AI crap” the click rate drops drastically :)
Let's imagine a scenario with two identical restaurants with the exact same quality of food.
One serves their dish as a fully vegan option but doesn't tell the customers.
Hardline "oorah, meat only for me" dude walks in and eats the dish, loves it.
If he goes to the other restaurant and is told beforehand that "sir, this dish is fully vegan" - do you think they'd enjoy it as much?
Prejudices steer people's opinions, a lot. Just like people stop enjoying movies and games due to some weird online witch-hunt that might later turn out to be either a complete willful misunderstanding of the whole premise (Ghost in the Shell) or a targeted hate campaign (The Marvels, and many other movies starring a prominent feminist woman).
Look at how easy it is to make the argument in the other direction:
> People were told by large companies to like LLMs and so they did, then told other people themselves.
Those add nothing to the discussion. Treat others like human beings. Every other person on the planet has an inner life as rich as yours and the same ability to think for themselves (and inability to perceive their own bias) that you do.
What you derogatorily call normies are the rest of the world caring about their business until one day some tech wiz came around to say “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
- annoyance at stupid AI features being pushed on them
- Playing around with them like a toy (especially image generation)
- Using them for work (usually writing tasks), with varying effectiveness, from pretty helpful to actively harmful, depending on how much of a clue they have in the first place.
Discussion or angst about the morality of training or threats to jobs doesn't really enter much into it. I think this apathy is also reflected in how this has not seemingly affected the sales of this game at all in the months that it has been reported on in the video game press. I also think this is informed by how most people using them can fairly plainly see they aren't really a complete replacement for what they actually do.
> “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
Nope, most are just annoyed by AI slop bombarding them at every corner, AI scams making the news for claiming another poor grandma, and the AI tech industry making shit expensive. Most people's jobs are not currently under direct threat, unless you work in tech or art.
Amongst many other legitimate reasons.
No, I don't think I am.
> AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.
There's been a whole lot of that for quite a long time targeting normie contexts, too; in fact, the hate in normie contexts is a direct response to it, because the hype in normie contexts is a lot of particularly clumsy grifting plus the nontechnical PR of the big AI vendors (categories which overlap quite a bit, especially in Sam Altman's case). And the hate in normie contexts shows basically zero understanding of what AI even is beyond what could be gleaned from that hype, plus some critical pieces on broad (e.g., total water and energy use, RAM prices) and localized (e.g., fossil fuel power plants in poor neighborhoods directly tied to demand from data centers) economic and environmental impacts.
> What you derogatorily call normies
I am not using “normie” derogatorily, I am using it to contrast to tech contexts.
I for one cannot wait for a future where grandparents get targeted ads showing their grandchildren, urging them to buy some product or service so their loved ones have something to remember them by...
Whereas AI seemed to have a pretty good run for around a decade, with lots of positive press around breakthroughs and genuine interest if you showed someone AI Dungeon, DALL-E 2, etc., before it split into a polarized topic.
This is not a winning PR move when most normal people are already pretty pro-artist and anti tech bro
If that tangible result doesn't occur, then people will begin to criticize everything. Rightfully so.
I.e., the future of LLMs is now wobbly. That doesn't necessarily mean a phase shift in opinion, but wobbly is a prerequisite for a phase shift.
(Personal opinion at the moment: LLMs need a couple of miracles in the same vein as the discovery/invention of transformers. Otherwise, they won't be able to break through the current fault barrier, which is too low at the moment for anything useful.)
There was a time that I remember when you could gripe at a party about banner ads showing up on the internet and have a lot of blank stares. Or ask someone for their email address and get a quizzical look.
I pointed my dad to ChatGPT a few days ago and instructed him on how to upload/create an AI image. He was delighted to later show me his AI "American Gothic" version of a photo of him and his current wife. This was all new to him.
The pushback, though, I think is going to be short-lived in the way other pushbacks were short-lived. (I remember the self-checkout kiosks in grocery stores were initially a hard sell, as an example.)
along with news about "AI" causing electricity bills to rise
every form of media is overrun and infested with poor quality slop
garbage products (microsoft copilot) forced on them and told by their bosses to use it, or else
gee I wonder why normal people hate it
Programmers criticized the code output. Artists and art enjoyers criticized cutting out the artist.
For instance, see Luddites: https://en.wikipedia.org/wiki/Luddite
By all means, I use it. In some instances it is useful. I think it is mostly a technology that causes damages to humanity though. I just don't really care about it.
> But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”[1]
[1] https://www.smithsonianmag.com/history/what-the-luddites-rea...
The issue is not the technology per se, it's how it's applied. If it eliminates vast swathes of jobs and drives wages down for those left, then people start to have a problem with it. That was true in the time of the Luddites and it's true today with AI.
They just don't like it when the machines are able to do the mediocre job they get paid to do.
Imagine if we had listened to the Luddites back in the day...
https://english.elpais.com/culture/2025-07-19/the-low-cost-c...
> Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days to be replaced with the correct textures that had always been intended for release, but were missed during the Quality Assurance process.
When someone goes three miles per hour over the speed limit they are literally breaking the law, but that doesn’t mean they should get a serious fine for it. Sometimes shit happens.
Nobody is preventing the studio from working, or from continuing to make (ostensibly) tons of money from their acclaimed game. Their game didn't meet the requirements for one particular GOTY award, boo hoo
I understand where you’re coming from, but it’s perfectly sane if your legal system recognizes and accepts that speed detection methodologies have a defined margin of error; every ticket issued for speeding within that MoE would likely be (correctly) rejected by a court if challenged.
The buffer means, among other things, that you don’t have to bog down your traffic courts with thousands of cases that will be immediately thrown out.
The other way around seems more clear in a legal sense to me because we want to prove with as little doubt as possible that the person actually went above the speed limit. Innocent until proven guilty and all that. So we accept people speeding a little to not falsely convict someone.
Yes, you _could_ do some mental math and figure out that your speedometer is probably calibrated with some buffer room on the side of overreporting your speed, so you're probably actually doing 96km/h, and you know you probably won't get dinged if you're doing 105km/h, so you "know" you can probably do 110km/h per your speedometer when the sign is 100km/h.
Or you could just not. And that's the intention. The buffers are in there to give people space for mistakes, not as something to rely on to eke 10% more speed out of. And if you start to rely on that buffer and get caught on it, that's on you.
Like, using automatic lipsync is "generative AI"; should that be banned? Do we really want to fight over that purely work-saving feature?
But you’re also not supposed to drive as close to the speed limit as possible. That number is not a target to hit, it’s a wall you should stay within a good margin of.
I understand analogies are seldom flawless, but the speed limit one in particular I feel does not apply because you can get a fine proportional to your infraction (go over the limit a little bit, small fine; go over it a lot, big fine) but you can’t partially retract an award, it’s all or nothing.
In the former sort of country, drivers are expected to use their judgement and often drive slower than the limit. In the latter sort of country, driving at the speed limit is rather... limiting, thus it is common to see drivers slightly exceeding the speed limit.
(I have a theory in my head that – in general – the former sort of country has far stricter licensing laws than the latter. I am not sure if this is true.)
Anyway, I don't agree with banning generative AI, but if the award show wants to do so, go ahead. What has caused me to lose any respect for them is that they're doing this for such a silly reason. There's a huge difference between using AI to generate your actual textures and ship those, and.... accidentally shipping placeholder textures.
It really illustrates that this is more ideological than anything.
You don't need to like their rules. Make your own and do better if you want to. Saying they shouldn't enforce their own rules because you don't like them sounds ridiculous. Saying they shouldn't enforce their own rules because it's somehow unfair is literal nonsense.
Would love to see more "I don't like these rules" and a lot less "these rules are fascist!".
Even if all the AI-generated content had been replaced before release, this would still be a lie.
They should rename to the Digital Amish game awards or something.
It's like having doping rules in sports and then disqualifying someone for using caffeine in their gym plans.
The question is stupid and I think Sandfall should be given the benefit of doubt that they interpreted the question not literally, but in a way which actually makes sense.
But I'm kinda thinking this isn't THAT serious =)
In that view, it doesn't matter whether you use it for placeholder or final assets. You paying your ChatGPT membership makes you complicit with the exploitation of that human creative output, and use of resources.
This is just another scheme where those at the top are appropriating the labor of many to enrich themselves. This will have so many negative consequences that I don't think any reactions against it are excessive.
It is irrelevant whether AI has "soul" or not. It literally does not matter, and it is a bad argument that dilutes what is really going on.
There is still human intentionality in picking an AI-generated resource for a surface texture, landscape, concept art, whatever. Doubly so if it is someone who creates art themselves using it.
When's the last time someone with your opinion turned out to be right in the long run?
Of course, I am presuming you can read. I lean on optimism.
Expect the worst and you will never be disappointed.
There's also been an extremely effective propaganda campaign by the major entertainment industry players to get creatives to come out against AI vocally. I'd like to see what percentage of those artists made the statement to try and curry favor with the money suits.
Where can I find out more about this?
It'll never happen because the grift is the point.
It’s been insane to me to watch the “creative class”, long styled as the renegade and anti-authoritarian heart of society, transform into hardline IP law cheerleaders overnight as soon as generative AI burst onto the scene.
And the environmental concerns are equally disingenuous, particularly coming from the video game industry. Please explain to me how running a bunch of GPUs in a data center to serve people’s LLM requests is significantly more wasteful than distributing those GPUs among the population and running people’s video games?
At the end of the day, the only coherent criticism of AI is that it stands to eliminate the livelihood of a large number of people, which is perfectly valid concern. But that’s not a flaw of AI, it’s a flaw of the IP laws and capitalistic system we have created. That is what needs addressing. Trying to uphold that system by stifling AI as a technology is just rearranging deck chairs on the Titanic.
It’s immersion breaking to try and talk to a random character only to hit a loop of one or two sentences.
How awesome would it be for every character to be able to have believable small talk, with only a small prompt? And it wouldn’t affect the overall game at all, because obviously the designers never cared to put in that work themselves
GenAI doing chore work is IMO the best use case
It doesn't seem strange that an industry award protects the workers in the industry. I agree, it seems harsh, but remember this is just a shiny award. It's up to the Indie Game Awards to decide the criteria.
Realistically, no.
Is it really though? After all it's just maybe a junior artist.
I've had to work with some form of asset pipeline for the past ten years. The past six in an actual game though not AAA. In all these years, devs have had the privilege of picking placeholders when the actual asset is not yet available. Sometimes, we just lift it off Google Images. Sometimes we pick a random sprite/image in our pre-existing collection. The important part is to not let the placeholder end up in the "finished" product.
> It's up to the Indie Game Awards to decide the criteria.
True and I'm really not too fond of GenAI myself but I can't be arsed to raise a fuss over Sandfall's admission here. As I said above, the line for me is to not let GenAI assets end up in the finished product.
Maybe, some sort of a temporary asset management system is required?
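Even something crude would catch this class of mistake. A minimal sketch of a release-gate check, assuming a hypothetical naming convention where temporary assets are prefixed `PLACEHOLDER_` (the marker, paths, and function names here are all illustrative, not any real pipeline's API):

```python
import sys
from pathlib import Path

PLACEHOLDER_MARKER = "PLACEHOLDER_"  # hypothetical team convention for temp assets

def find_placeholders(asset_dir):
    """Return every asset file whose name flags it as a temporary placeholder."""
    root = Path(asset_dir)
    return sorted(p for p in root.rglob("*")
                  if p.is_file() and p.name.startswith(PLACEHOLDER_MARKER))

def check_build(asset_dir):
    """Gate the release build: fail if any placeholder asset would ship."""
    leftovers = find_placeholders(asset_dir)
    for p in leftovers:
        print(f"placeholder would ship: {p}", file=sys.stderr)
    return len(leftovers) == 0
```

Run as a CI step before packaging, this would have flagged a leftover placeholder texture long before QA, let alone release.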
And up to us to decide whether The Indie Game Awards has impaired their credibility by choosing such a ridiculous criterion.
Do you think AAA game development teams pass on AI despite the fact that it produces better results at a fraction of the cost? I think not. So why cripple indie developers by imposing such a standard on them?
It seems completely out of touch with what's going on in the world of software development.
This argument in this industry is problematic. The entire purpose of computers is to automate processes that are labor intensive. Along the way, the automation process went from doing what was literally impossible with human labor to capturing ever deeper levels of skill of the practitioners. Contrast early computer graphics, which involved intensive human intervention, to modern tools. Since HN almost certainly has more developers than graphics artists, contrast early computer programming (where programmers didn't even have the assistance of assemblers and where they needed a deep knowledge of the computer architecture) to modern computer programming (high level languages, with libraries abstracting away most of the algorithmic complexity).
I don't know what the future of indie development looks like. In a way, indie development that uses third-party tools that captures the skills of developers and graphics artists traditionally found in major studios doesn't feel very indie. On the other hand, they are already doing that through the use of game engines and graphics design/modelling software. But I do know that a segment of the industry that utterly ignores those tools will be quickly left behind.
The game industry, especially AAA, is actually having a major identity crisis right now as technology evolves and jobs adapt around the new tool of AI/LLMs. The Game Awards (not indie) should demonstrate that this dolphin committee you fear already exists, because the limiting factors in all industries are major resources: time, capital, experience. AI/LLMs will enable far more high-skill work to be accomplished with less experience, time, and possibly capital (sidestepping the ethics/practicality of data centers).
there is a whole basket of technologies which you can label as "gen AI" but which have none of the problems that make people hate "gen AI"
as a very dumb example, some pretty decent "line smoothing" algorithms are technically gen AI but have none of the ethical issues
> technically gen AI but have none of the ethical issues
so as an extreme example, if your artists used that line smoothing algorithm, your game isn't qualified anymore
it's an (maybe the most) extreme example of something which is "gen AI" but not problematic, and as such a naive rule saying "no gen AI at all" is pretty bad competition rule design
You're saying banning line smoothing algorithms for ethical reasons makes no sense. I totally agree!
I'm wondering if this actually happens.
You'd have to cite an actual example of something this ridiculous. Non gen-AI algorithms have been line smoothing just fine for 2 decades now for less than a trillionth of the resources required to use gen AI for the same task.
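For reference, here is how cheap classic non-AI line smoothing is: Chaikin's corner-cutting algorithm (from 1974) is a few multiplications per point and no model at all. A sketch:

```python
def chaikin(points, iterations=2):
    """Chaikin's corner-cutting: replace each segment with two points
    placed 1/4 and 3/4 of the way along it. Purely geometric smoothing,
    no model and no training data involved."""
    for _ in range(iterations):
        smoothed = [points[0]]  # keep the original endpoints
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(points[-1])
        points = smoothed
    return points

jagged = [(0, 0), (1, 2), (2, 0), (3, 2)]
print(len(chaikin(jagged)))  # each pass maps n points to 2n, so prints 16
```

Whether a given art tool's "smoothing" is geometry like this or a learned model is exactly the distinction a naive "no gen AI" rule glosses over.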
Just a cudgel to shut down discussion
Was that before or after real people had their work scraped w/out permission or acknowledgement?
Describing these things as having no difference appears deliberately obtuse.
And pulling entirely-new classes of IP rights out of thin air doesn't?
Well after. Ever notice how a good game was made and then suddenly 50 like it appeared? Everyone scraped id soft's ideas and tech. Everyone followed Blizzard's ideas. The amazing thing that happened IMO is when companies started putting up patents so that such scraping couldn't be done.
One that comes to mind is the Shadow of Mordor nemesis system. Great idea, would've been neat to see in other games. Nope not allowed for 11 years. If things like this were around at the start of gaming it would likely be in a very sorry state.
As for patents, heard of Ralph Baer?
> "When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of Sandfall Interactive confirming the use of gen AI art in production on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination."
Whatever placeholder you use is part of your development process, whether it ships or not. Saying they used none when they did is not cool and rightfully makes one wonder what other uses they may be hiding (or “forgetting”). Especially when apparently they only clarified it when it was too late.
I can understand the Indie Game Awards preferring to act now. Had they done nothing, they would have been criticised too by other people for not enforcing their own rules. They no doubt would’ve preferred to not have to deal with the controversy. Surely this wasn’t an easy decision for them, as it ruined their ceremony.
We’re all bystanders here with very little information, so I’d refrain from using unserious expressions like “witch hunt”, especially considering their more recent connotations (i.e. in modern times, “witch hunt” is most often used by bad actors attempting to discredit legitimate investigations).
If it was malicious they wouldn't have said a word. They probably interpreted the rule as "nothing in the shipped game is AI" (which is a reasonable interpretation IMO), implemented a policy to replace any asset made with AI, and just missed a texture.
Also, the term was pretty vague. Like, is using automatic lipsync forbidden? That's pretty much generative AI; the result just isn't a picture but a sequence of movements.
They didn’t have a choice, it was obvious it was AI. They might still have other places where they used it but it’s harder to notice.
But some AI-gen'd placeholders break through and suddenly we're all about punishing an oversight?
That's the definition of a witch hunt, and it's past time we admit it.
You gave examples of a witch hunt. Not definition.
Sandfall opened themselves up to this when they accepted the nomination knowing the rules for this particular award prohibited AI use of any kind.
Note that there is also now a discussion about how much AI was used to design the enemies as well, given the rather bizarre appearances of many of the enemies in the latter portions of the game.
By that logic, "fake news" is now unusable because Trump weaponized it, despite the term accurately describing a real phenomenon that existed before and after his usage. "Gaslighting" would be suspect because it got picked up by people dramatizing ordinary disagreements. Every useful term for describing social dynamics gets captured by someone with an agenda eventually.
Hitler liked chocolate, doesn't mean you shouldn't eat chocolate. "You used a word that bad people also use" is not interesting - it's a way of avoiding the object-level debate while still claiming moral high ground.
So, in essence, you’re agreeing.
> Hitler liked chocolate, doesn't mean you shouldn't eat chocolate.
Arguments have nothing to do with dietary preferences, that comparison makes no sense.
Gaslighting would be simply incorrect, since gaslighting refers to an elaborate scheme of making somebody doubt their own perception/sanity. It is a severe form of abuse, requires an ongoing relationship with power dynamics (it cannot arise from a single interaction), and typically results in long-term PTSD for the victim(s).
Agree on the capturing. Watering down terms is highly unfortunate for everyone.
The article where Meurisse admitted to using AI in the pipeline is from April. You're implying a level of dishonesty that clearly isn't there.
Few care about the mainstream game review sites or oddball game award shows as their track record is terrible (Concord reviews).
Most go by player reviews, word of mouth, and social media.
Next year a lot of families will struggle to buy a needed computer for their kids' school because some multibillion-dollar tech companies went all-in.
Awards that focus on quality are too desired not to be a thing.
I expect generative AI to become a competitive advantage taken up by the vast majority.
To me, art is a form of expression from one human being to another. An indie game with interesting gameplay but AI generated assets still has value as a form of expression from the programmer. Maybe if it's successful, the programmer can afford to pay an artist to help create their next game. If we want to encourage human made art, I think we should focus on rewarding the big game studios who do this and not being so strict on the 2 or 3 person teams who might not exist without the help of AI.
(I say this knowing Clair Obscur was made by a large well respected team so if they used AI assets I think it's fair their award was stripped. I just wish The Game Awards would also consider using such a standard.)
I don't see the problem because it isn't cutting more artists out of the loop, if anything they get more of the meaningful work
Use AI to automate the boring shit so you can focus on the stuff that matters.
That throwaway texture that's on one model in the tutorial doesn't need days of focused work, so have the AI whip something up. Now your artist has two extra days for key art that's more visible.
Just two days ago there were reports that Naughty Dog, a studio that allegedly was trying to do away with crunch, was requiring employees to work "a minimum of eight extra hours a week" to complete an internal demo.
https://bsky.app/profile/jasonschreier.bsky.social/post/3mab...
How though? If questions about style or substance can be answered with "because the AI did it, it's just some stochastic output from the model", I don't see how that allows for expression between humans.
Right now the rules they're using are going against larger forces in the world that are going to become standard (if they're not already).
And to your point, these are indie developers, Davids going up against the AAA Goliaths that have a bottomless purse with which to shower money on a "product". I dabble in art (and wrote some indie games decades ago) and I am fine with AI-generated art (despite my daughters' leanings in the opposite direction).
From the FAQ:
> The ceremony is developer-focused, with game creators taking center stage to present awards, recognize their peers, and openly speak without the pressure of time constraints.
https://www.indiegameawards.gg/faq
Regardless of AI's inevitability, I don't particularly care to celebrate its use, and I think that's the most reasonable stance for a developer-focused award to take.
We should be able to celebrate the creation, execution, concept of a game without letting AI assets nullify the rest.
These awards are behind the times and risk irrelevance.
What software in 2025 is written without AI help?
Every game released recently would have AI help.
For indie games in particular, that is very much not true. In fact, Steam has a 'made with AI' label, so it's not even true on that platform.
Machines? Bah, humbug!
/s
To others you may be addressing, I suspect they would say the ship has already sailed on textiles. Perhaps they are trying to sink this ship before it sails.
AI exists by calculations without invoking law or social agreement.
Which will endure?
IP depends on belief and enforcement.
AI depends on matter and energy.
Before we know it we will have entrusted a lot to AI and that can be both a good or a bad thing. The acceleration of development will be amazing. We could be well on our way to expand into the universe.
those guys worked in AAA studios and they got a $10 million budget
how "indie" is that?
"Existing outside of the traditional publisher system, a game crafted and released by developers who are not owned or financially controlled by a major AAA/AA publisher or corporation, allowing them to create in an unrestricted environment and fully swing for the fences in realizing their vision."
In other words, "indie" means a developer-driven game independent of the establishment. It doesn't necessarily imply a low budget or the lack of professional experience.
What's the maximum developer count? Do outsourced assets count, and if so, how? By the number of people at the outsourcing company who directly worked on the assets, or by the whole headcount?
they had an orchestra the size of entire indie game studios for the music alone; does that seem indie?
Taking a scorched earth approach to AI usage is just being a luddite.
I'd love for them to create a separate category for "Best non-AI game". They can fight it out over that award. Perhaps then in a decade or so they will quietly let the award category fade away.
Where does the organisation intend to draw the line?
The answer to this question is always "somewhere". Just because I can't proclaim an exact number of trees that constitute a forest doesn't mean the concept doesn't exist.
No, but it becomes a dubious concept when you define forests as a collection of only conifer trees and that deciduous trees don't count for the definition of a forest.
But banning using AI at all while developing the game is... obviously insane on its face. It's literally equivalent to saying "you may not use Photoshop while developing your game" or "you may not use VS Code or Zed or Cursor or Windsurf or Jetbrains while developing your game" or "you may not have a smartphone while developing your game".
Just to nitpick, AAA game developers probably don't use the editors you mentioned, since they build native applications.
If you can't use LLMs to generate placeholder graphics that don't ship in the actual game, then why can you use coding editors that let you use LLMs to generate code?
I work in mobile games and practically everyone is using either VSCode or some Jetbrains IDE. A few use Visual Studio but it has AI autocomplete too.
Of course you'd use Jetbrains for Android...
You need to crack open Xcode only for very specific debugging tasks.
To give even an inch under these circumstances seems like suicide. Every use of LLMs, however minor, is a concession to our destruction. It gives them money, it gives them power, it normalizes them and their influence.
I find the technology fascinating. I can think of numerous use cases I'd like to explore. It is useful and it can provide value. Unfortunately it's been deployed and weaponized against us in a way that makes it unacceptable under any circumstances. The tech bros and oligarchs have poisoned the well.
It’s just a tool, but like any tool it can be used the right way or wrong way. We as a society are still learning which is which.
I'm a programmer, and I enjoyed the sort of "craftsman" aspect of writing code, from the 1990s until... maybe last year. But it's over. Writing code manually is already the exception, not the rule. I am not an artist, and I also really do understand that artists have a more legitimate grievance (about stealing prior art) than we programmers do.
As a practical matter, though, that's irrelevant. I suspect being an "artist" working in games, movies, ads, etc will become much like coding already is: you produce some great work manually, as an example, and then tell the bots "Now do it like this ... all 100 of you."
Seems like a histrionic take.
https://rl.bloat.cat/preview/pre/k7zsc1nls7af1.jpeg?width=19...
This does make it a bit more suspicious. It seems unlikely they coincidentally used gen AI placeholders only for the one case where it’s absurdly obvious.
Learning. From the website:
> I started it on January 13, 2023, to learn something new and improve my GNU/Linux skills.
Also, not relying on a single service for one thing is a good thing, as Reddit itself demonstrated when they closed off API access.
A blanket ban is the way to go on this, people trying to muddy the waters professing they just have nuanced opinions know what they are doing... it's only a horse armour pack, it doesn't affect gameplay, you don't have to use it, you won't notice if it's not there...
It used to be there were tons of websites, like textures.com, which curated a huge database of textures, usable by art professionals and hobbyists alike. Some of it was free, some you had to pay for, but generally speaking it wasn't too expensive, and if you picked up 3D modeling as a hobby, you could produce pretty decent results without spending a dime.
Then came the huge companies (you know which ones) which slurped up all these websites, and turned them into these SaaS monstrosities, with f2p mechanics. Textures were no longer free, but you had to pay in 'tokens' which you got from a subscription, which pushed you into opaque pricing models, bundling subscriptions, accidental yearly signups with cancellation fees, you know the drill.
Then came AI, which is somehow fair use, and instead of having to pay for that stuff, you could ask SD to generate a tiling rock texture for you.
Is this blatant copyrightwashing? I'd argue yes. But in this case, does copyright uphold any morally supportable principle, or does it help artists get paid?
F no.
More and more AAA games are going to have AI. Whether it’s AI content, AI dialogue, AI driven storytelling, or AI driven animation.
Having game of the year title stripped over some texture use is some next level petty BS.
Looks like "regulations nitpicking". In the end it doesn't represent the players' best interests.
Doubly so if the usage was de minimis.
I think it's the artists, not the tools, that make the art. Overuse of anything is gauche; but I am confident that beautiful things can be made with almost any tool, in the hands of a tasteful artist.
I've never liked this argument. If AI is a tool, then having my own personal woodworker on staff makes me a woodworker too.
Rubbing a brush on a canvas was good enough for the renaissance masters, why are we collectively okay with modern "artists" using "virtual brushes" and trivializations of the expressive experience like "undo" when it's not "real art" because they're leaning so heavily on the uncaring unthinking machine and the convenience in creation it offers rather than suffering through the human limitations that the old masters did? Are photographers not artists too then, because they're not actually creating, just instead capturing a view of what's already there?
The usual response to this is some trite reply about how AI is 'different' because you're 'just' throwing prompts at it and it's completely creating the output itself -- as if it's inconceivable that there might be someone who doesn't just shovel out raw outputs from an AI and call it 'art', and is instead actually using it in a contributory role on a larger composition that they, themselves, as a human, are driving and making artistic decisions on.
E33 is a perfect example here. Is the artistic merit of the overall work lessened by it having used AI in part of its creation? Does anyone really, truly believe that they abdicated their vision on the overall work to machines?
Just because someone can drag and drop to draw a circle in an image editing app instead of using their own talent and ability to freehand it instead doesn't mean what they then go on to do with that circle isn't artistic.
Like most things, art exists on a spectrum and there are many levels. Most would say a single pixel isn't art, yet at some point many cross some invisible line where it becomes art. Likewise, at some point a bunch of logic and pixels become a best-selling indie game. It's more than the sum of its parts, and I don't agree with saying that sum is suddenly less just because one of those parts was AI generated. The sum should logically be the same value regardless.
But then that's a very mathematical way of looking at it. Art and the appreciation of it has never been logical, but instead emotional. AI invokes negative emotions in many people, and so the art is diminished in their eyes. This makes sense to me.
However, I don't necessarily agree with this approach of yanking back the award. It reeks of horse buggy whip manufacturers trying to push back the tide. But then I've never understood comparing one piece of art to another and declaring one the winner. If art is simply something that invokes emotion in the viewer, and everyone's emotional response is different, it makes no sense to have awards to me.
Obviously the woodworker is a person. And you would be on a team that has woodworking as part of their skillset.
But the way you set up your reductio-ad-absurdum it can be read as implying the AI is a person too. O:-)
You know what, rather than just going for a flip rhetorical takedown, what if we took that implication seriously for a second?
What if you did mean to argue that (the) AI is a proto-person. Say you argue that they deserve to be in the credits as a (junior?) member of the team. That'd be wild! A really interesting framing, which I haven't heard before.
Or the weaker version: Use said framing pro-forma as a (practical?) legal fiction. We already have rules on (C) attribution. It might be a useful framing to untangle some of the spaghetti.
They used some AI placeholders during development, as that can majorly speed up/unblock the dev loop without really raising any ethical issues (you still hire artists to produce all the final assets), and in some corner cases they forgot to replace the placeholder.
Also, some of the tooling they might have used might technically count as gen AI. E.g. way before LLMs became big, I had dabbled a bit in gen AI, and there were some decent line-work smoothing algorithms and similar, with none of the ethical questions. Tools which help remove some dumb, annoying overhead for artists but don't replace "creative work". But which are technically gen AI anyway...
I think this mainly shows that a blanket ban on "gen AI", instead of one on, say, "gen AI used in ways that replace artists", is kinda tone-deaf/unproductive.
Frankly, in the wider debate, I think engagement algorithms are partially to blame. Nuanced approaches don't get engagement, so on every topic everyone is split into two or more tribes yelling at each other. Folks in the middle who just want to get along have a hard time.
(Present company excepted of course. Dang is doing a fine job!)
Zero-effort placeholders have existed for decades without GenAI, and were better at the job. The ideal placeholder gives an idea of what needs to go there, while also being obvious that it needs to be replaced. This [1] is an example of an ideal placeholder, and it was made without GenAI. It's bad, and that's good!
[1] https://www.reddit.com/r/totalwar/comments/1l9j2kz/new_amazi...
A GenAI placeholder fails at both halves of what a placeholder needs to do. There's no benefit for a placeholder to be good enough to fly under the radar unless you want it to be able to sneak through.
This means that for some use cases (early QA, 3D design tweaks before the final graphics are available, etc.) they are fully useless.
It's both viable and strongly preferable to track placeholders in some consistent way unrelated to their looks (e.g. have a bool property associated with each placeholder). Otherwise you might overlook some rarely seen corner-case textures when doing the final cleanup.
So no, placeholders don't need to be obvious at all, and, as mentioned, them looking out of place can be an issue for some usages. Having something resembling the final design is better _iff_ it's cheap to do.
So no, they aren't failing; they are succeeding, if you have proper tooling and don't rely on a crutch like "I will surely notice them because they look bad".
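A minimal sketch of that kind of flag-based tracking (all names here are illustrative, not from any real pipeline): each asset carries a bool, and a pre-ship audit step lists everything still flagged, so cleanup doesn't depend on a placeholder looking obviously wrong.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    path: str
    is_placeholder: bool = False  # flipped on when a temp asset is checked in

def audit_placeholders(assets: list[Asset]) -> list[str]:
    """List every asset still flagged as a placeholder.

    Meant to run as a pre-ship build step, so the final cleanup is a
    mechanical check rather than a visual one.
    """
    return [a.path for a in assets if a.is_placeholder]

assets = [
    Asset("textures/rock_diffuse.png"),
    Asset("ui/door_sign.png", is_placeholder=True),
]
print(audit_placeholders(assets))  # ['ui/door_sign.png']
```

In a real engine the flag would live in asset metadata and the audit would gate the release build, but the shape of the check is the same.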
Right. The game is not eligible for the award. This is not a comment on the quality of the game.
The Indie Game Awards require zero AI content. The devs fully intended to ship without AI content but made a mistake, disqualifying themselves for the award. This is simply how competition rules work. I have a friend who plays competitive trading card games, and one day he showed up to a national event with an extra card in his box after playing with some friends late at night. It was an honest mistake, and the judges were quite sympathetic, but he was still disqualified!
BTW, the game is incredible.
Very curious to hear what channels you follow and how often per week. My RSS feed was spammed by it this and previous years
He just pointed out (correctly) that the game awards that were being spoken of everywhere for the last few weeks were not the one related to this article.
I, on the other hand, will add a judgement to this discussion: if you consider The Game Awards a joke, when it is by far the most watched event in gaming, eclipsing (by viewer count) entertainment events in sports such as the NBA Finals... you've certainly got "interesting" opinions.
I think it's exactly their popularity that led people to call the big awards shows "a joke." Pretty common with stuff like the Emmys and Grammys.
That, and utter surprise at the lack of recognition they showed toward Silksong.
But I don't mind people using AI; it's their own choice. The focus then just becomes the curation skill of the individual, team, company, etc. applied to the generated AI output. So taking away the award is kind of weak, given that people enjoyed the game.
To nitpick: the independent game awards are the Luddites here. The Luddites were a protest movement, not just a group of people unfamiliar with technology.
In the historical context that's apparently become appropriate again, Luddites violently protested the disruptive introduction of new automation in the textile industry that they argued led to reduced wages, precarious employment, and de-skilling.
The founders of this studio come from rich family backgrounds, to think they have anything in common with what the average person understands as an "indie game" developer is laughable. For example, they supposedly rented an office to work in, in a building owned by the founder's father's real estate firm, of course.
Projects like these used to be called AA games. It's a fantastic game, it doesn't have to be indie to be good.
The Indie Game Awards, despite sounding similar to The Game Awards, is an unrelated organization that holds their awards the same week. They are small and this is their second year.
Do TDD and LLMs have a kernel of utility in them? Yeah, I don't see why not. But what the majority of people are saying doesn't seem to be true, and what the minority of people I can actually see using them 'for real' are doing just isn't applicable to anything I care about.
With that in mind, the only thing less real to me than a tool that I have to vibe with at a social zeitgeist level to see benefits from is an award when I already have major financial and industrial success.
Half the people on my team have played the game. For months, all I would hear about w.r.t. games was how this game was smashing milestones and causing the entire industry to do some soul-searching or put its fingers in its ears.
I'm sure they can console themselves over the loss of this award with their piles of money.
[An LLM did help me with a cryptography api that was pretty confusing, but I still had to problem solve that one because it got a "to bytes" method wrong. So... once in a blue moon acceleration for things I'm unfamiliar with, maybe.]
For those who might care, we use generative AI as much as possible in every way possible without compromising our vision, this includes sound, art, animation, and programming. These are often edited or entirely redone (effectively placeholders). It's part of the process, similar to using procedural art generation tools like geometry nodes in Blender or fluid sim particles generators.
And btw, both UE5 and Unity now have gen AI features (and addons) that all developers can and will use.
If they want to ban AI from their show, that is their prerogative, but considering that every nominee probably used AI somewhere (I'd bet money on this), this feels like blatantly dishonest posturing.
Not if it's against the rule. They got caught with skidmarks. And while the "Ackshually, those skidmarks are just placeholders"-defense may elicit a few cheap laughs, it doesn't matter if you follow the rule to its logical conclusion. Any possible deception in such cases comes on top of it. As it always has; that doesn't change just because you found a new plaything (LLMs) in the box.
The IGA FAQ states, in its entirety on this topic: "Games developed using generative AI are strictly ineligible for nomination." [1]
Sandfall probably interpreted this reasonably: no AI assets in the shipped product. They say they stripped out AI placeholders before release (and patched the ones they missed). But the IGA is reading it strictly: any use during development disqualifies.
If that's the standard, it gets interesting. DLSS and OptiX are trained neural networks in an infrastructure-shaped raincoat—ML models generating pixels that were never rendered. If you used Blender Cycles with OptiX viewport denoising while iterating on your scenes, you "developed using generative AI."
By a strict reading, any RTX-enabled development pipeline is disqualifying. I wonder if folks have fully thought this through.
[1] https://www.indiegameawards.gg/faq (under "Game Eligibility")
DLSS and Cycles denoising are, well, denoising. It's the same process as denoising in Stable Diffusion, essentially, and was trained in the same way.
Upscaling technologies are transformative but post-processing. The uproar isn't over what happens in the render pipeline but in the creative process.
Same reason why auto-LoD generation wouldn't and hasn't pissed anyone off: it's not generating LoDs of a mesh that's problematic, it's generating the source model that an artist would create.
Developed and shipped are different words.
> But the IGA is reading it strictly
You meant they meant it strictly? They wrote the policy.
Putting essentially arbitrary limitations on which tools game developers are allowed to use is just nonsensical. Yes, the output of AI models can be really bad, but then the game obviously doesn't deserve an award. Especially for an indie game with limited resources, AI can be a huge force multiplier. Gatekeeping awards based on these meaningless characteristics just seems very strange.
If a game is well made and people enjoy it then what's the problem with utilising AI generated code or assets? What's the objective?
hambes•1mo ago
The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
The quality suffers in both cases, and I would personally criticise generative AI in source code as well, but the ethical argument is only against profiting from artists' work without their consent.
ahartmetz•1mo ago
As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies and make developers of closed source competition more effective. The legal situation is not clear yet.
protimewaster•1mo ago
1. There is tons of public domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content any more than a code-generating one.
2. I have no doubt that there exist both source code AIs that have been trained on code that had licenses disallowing such use and art AIs have that been trained only on art that allows such use. So, it feels flawed to just assume that AI code generation is in the clear and AI art is in the wrong.
NitpickLawyer•1mo ago
The double standard here is too much. Notice how one is stealing while the other is learning from? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The art generated is not a 1-1 copy in any way.
blackbrokkoli•1mo ago
Code is an abstract way of soldering cables in the correct way so the machine does a thing.
Art eludes definition while asking questions about what it means to be human.
danielbln•1mo ago
Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.
saubeidl•1mo ago
I consider some code I write art.
Jensson•1mo ago
The game is art according to that definition while the individual assets in it are not.
perching_aix•1mo ago
Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.
Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.
> while asking questions about what it means to be human
I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say
> the machine does a thing
to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.
This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?
surgical_fire•1mo ago
If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?
And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?
Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.
tpmoney•1mo ago
All art? Those CDs full of clip art from the 90's? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.
magicalhippo•1mo ago
Ok but that's just a training issue then. Have model A be trained on human input. Have model A generate synthetic training data for model B. Ensure the prompts used to train B are not part of A's training data. Voila, model B has learned to produce rather than copy.
Many state of the art LLMs are trained in such a two-step way since they are very sensitive to low-quality training data.
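A toy sketch of that two-step shape (everything here is illustrative, not a real training pipeline): a "teacher" model A, which was trained on human data, labels prompts that were not in its own training set, and the "student" model B is fit only on those synthetic pairs.

```python
# Toy illustration of the two-step setup: the teacher stands in for
# model A (trained on human data); model B only ever sees pairs the
# teacher generated, never the teacher's original training data.

def teacher(prompt: str) -> str:
    # stand-in for model A's inference; a real teacher would be a
    # trained generative model, not a string transform
    return prompt.upper()

# prompts chosen to be disjoint from the teacher's training data
student_prompts = ["a mossy rock texture", "a rusted door sign"]

synthetic_pairs = [(p, teacher(p)) for p in student_prompts]

# model B would now be trained on synthetic_pairs only
for prompt, label in synthetic_pairs:
    print(prompt, "->", label)
```

The point of the sketch is only the data flow: B's training corpus is entirely teacher-generated, which is the sense in which B "produces rather than copies" the human originals.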
thatswrong0•1mo ago
Yeah, right. AI art models can be, and have been, used to basically copy any artist's style, in ways that make the actual artist's hard work and effort in honing their craft irrelevant.
Who profits? Some tech company.
Who loses? The artists who now have to compete with an impossibly cheap copy of their own work.
This is theft at a massive scale. We are forcing countless artists whose work was stolen from them to compete with a model trained on their art without their consent and are paying them NOTHING for it. Just because it is impressive doesn’t make it ok.
Shame on any tech person who is okay with this.
NeutralCrane•1mo ago
Concerns about the livelihood of artists or the accumulation of wealth by large tech megacorporations are valid but aren’t rooted in AI. They are rooted in capitalism. Fighting against AI as a technology is foolish. It won’t work, and even if you had a magic wand to make it disappear, the underlying problem remains.
wiseowise•1mo ago
According to your omnivision?
eucyclos•1mo ago
The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the 'if everyone did it' argument for making that distinction. But even if we take for granted that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race for machines better impersonating humans, and that also ends in a terrible place if everyone does it.
If there's an aspect I haven't considered here I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject and would really welcome seeing some appeals to logic or ethics, even if they disagree with my position.
david_shaw•1mo ago
I'm not sure that LLMs respect that restriction (since they generally don't attribute their code).
I'm not even really sure if that clause would apply to LLM generated code, though I'd imagine that it should.
swiftcoder•1mo ago
Note that this tends to require specific license exemptions. In particular, GCC links various pieces of functionality into your program that would normally trigger the GPL to apply to the whole program, and for this reason, those components had to be placed under the "GCC Runtime Library Exception"[1]
[1]: https://www.gnu.org/licenses/gcc-exception-3.1.html
NeutralCrane•1mo ago
In the end it doesn’t matter. Here “learning” means observing an existing work and using it to produce something that is not a copy.
blibble•1mo ago
so... all of them
pona-a•1mo ago
I always believed GPL allowed LLM training, but only if the counterparty fulfills its conditions: attribution (even if not for every output, at least as part of the training set) and virality (the resulting weights and inference/training code should be released freely under GPL, or maybe even the outputs). I have not seen any AI company take any steps to fulfill these conditions to legally use my work.
The profiteering alone would be a sufficient harm, but it's the replacement rhetoric that adds insult to injury.
starkparker•1mo ago
There are artists who would (and have) happily consented, licensed, and been compensated and credited for training. If that's what LLM trainers had led with when they went commercial, if anything a sector of the creative industry would've at least considered it. But companies led with mass training for profit without giving back until they were caught being sloppy (in the previous usage of "slop").
stinkbeetle•1mo ago
This reasoning is invalid. If AI is doing nothing but simply "learning from" like a human, then there is no "stealing from artists" either. A person is allowed to learn from copyright content and create works that draw from that learning. So if the AI is also just learning from things, then it is not stealing from artists.
On the other hand if you claim that it is not just learning but creating derivative works based on the art (thereby "stealing" from them), then you can't say that it is not creating derivative works of the code it ingests either. And many open source licenses do not allow distribution of derivative works without condition.
program_whiz•1mo ago
Analogy: the common area had grass for grazing which local animals could freely use. Therefore, it's no problem that megacorp has come along and created a massive machine which cuts down all the trees and grass which they then sell to local farmers. After all, those resources were free, the end product is the same, and their machine is "grazing" just like the animals. Clearly animals graze, and their new "gazelle 3000" should have the same rights to the common grazing area -- regardless of what happens to the other animals.
stinkbeetle•1mo ago
The analogy isn't really helpful either. It's trivially obvious that they are different things without the analogy, and the details of how they are different are far too complex for it to help with.
SirHumphrey•1mo ago
But code is complicated, and hallucinations lead to bugs and security vulnerabilities so it's prudent to have programmers check it before submitting to production. An image is an image. It may not be as nice as a human drawn one, but for most cases it doesn't matter anyway.
The AI "stole" or "learned" in both cases. It's just that one side is feeling a lot more financial hardship as the result.
surgical_fire•1mo ago
There is a problem with negative incentives, I think. The more generative AI is used and relied upon to create images (to limit the argument to image generation), the less incentive there is for humans to put in the effort to learn how to create images themselves.
But generative AI is a deadend. It can only generate things based on what already exists, remixing its training data. It cannot come up with anything truly new.
I think this may be the only piece of technology humans created that halts human progress instead of being something that facilitates further progress. A dead end.
expedition32•1mo ago
As always the market decides.