The artistic and thematic reason is obvious. It's a commentary on the show and AI art in general. To ignore this message because it "looks like AI" devalues the entire concept of human art.
He’s reaching, trying to find any leeway to say it was AI based on the words the artist used.
But then he ends with “well it sucks anyway”
Dude, just say “my bad” and “it’s not to my tastes”. You don’t need to double down so hard.
The first is that there can be 'non-general' cases, and this could be one of them.
The second is that a lot of 'standard digital tools' include AI features these days. A person can use generative AI without leaving Photoshop.
Most likely, he did exactly that: made a rough sketch and then got to the final product either by handing it to a model outright (which I'm guessing based on the maze) or by using gen AI tooling in his digital painting software to polish it into a finished painting.
It's not an accident that he's being so cagey and using exactly that verbiage.
False AI accusations are very harmful, especially when so much actual AI slop goes unnoticed.
EDIT: I stand corrected, the show’s analogy to AI is coincidental.
IMO that's fine because art isn't a one-way street. Audiences also play a role in how it's interpreted, in what happens to it in private, and in what happens once copyright and trademark expire.
I agree with "art isn't a one-way street". But it's also up to the artist whether people's interpretation is "right" or "wrong". Some artists love it when people find meaning in their work that wasn't intentional, and some don't.
One story that's burnt into my brain is about Ray Bradbury giving a guest lecture. What follows is meant to be a quote from Bradbury, though it's hard to know what's real these days.
From "Listen to the Echoes: The Ray Bradbury Interviews"
"Weller: have you encounted academic misinterpretation of your work?
Bradbury: I was lecturing at Cal Fullerton once and they misinterpreted Fahrenheit 451, and after about half an hour of arguing with them, telling them that they were wrong, I said, “Fuck you.” I've never used that word before, and I left the classroom.'"
I think it's fine to read Fahrenheit 451 and have your own opinions about its main theme, but it's another thing to get into an argument with the author about it.
Bradbury said Fahrenheit 451 is about the effect of mass media on society; if it were written today, it might be about the effect of AI on society.
But as far as your "correction" goes, even if the anti-genAI overtones at its inception are a coincidence, it wouldn't be too crazy for someone making related art today to play up the coincidence anyway. So I think your original idea was still a reasonable guess.
1) The maze on the carton can't be solved.
2) Two of the maze walls appear to be smudged, but nothing else in the image is.
3) The floral pattern on the plate is messed up - it repeats, but berries randomly change positions and the leaves change shape.
4) The milk carton has a splotchy texture that's really common in ChatGPT-generated illustrations.
I suspect it's not 100% gen AI - for example, the sharp outline around an out-of-focus xmas tree feels like a human-made artistic choice - but I'd bet it's at least partly composed from AI-generated elements.
And for what it's worth, the image pretty consistently trips gen AI image detectors (e.g., https://hivemoderation.com/ai-generated-content-detection).
It can if you just go around the maze.
I wonder how pissed Apple was when they realized they had paid him a few thousand (going by his posted rates) for "art" that used gen AI to a significant enough degree to have nonsensical components.
>for example, the sharp outline around an out-of-focus xmas tree feels like a human-made artistic choice
If I had to guess, it looks like he did the components of it separately and then just pasted them all in together. For example, you'd expect the light on the plate to brighten the milk carton near it. And the shadow on the plate doesn't line up with the bright light shining on the carton's side. These are just classic mistakes of sloppy artwork.
The fact is we're in a world where algorithms consume more attention than ever. There's more content than ever. We're more sedentary and glued to useless shit on a screen than ever. We consume content like we're pigs, and what do you feed pigs? Slop. I don't like this, and I try to keep my diet as healthy as I can, but it's probably still worse than it was 20 years ago, and most people don't care at all. It's all consumption for the sake of mindless distraction. The slop exists because the demand exists. People will watch the show anyway.
Not only oblivious but actively for it; I know many people who even watch fully AI generated content on TikTok or Instagram Reels. In fact, they know it's AI yet still like it, probably because some can look pretty cool or funny.
What's worth mentioning is that there has always been large quantities of "slop" even with human-generated art; the good stuff has always been a minority.
Breathtakingly beautiful AI-generated videos, with undeniable artistic merit. (Images generated with Midjourney, music generated with Suno).
Notwithstanding your previous points, these sorts of AI detectors flag many false positives; they're not worth relying on. The only approach that could actually work is specific watermarking in the images themselves, such as what Google does with SynthID in its generated images.
Being a trained artist myself, I don’t think many artists would deliberately make that kind of design choice given the overall style of the artwork.
To have a separate conversation from everyone else (who is talking about whether it's real or AI), I think it's interesting to see people's epistemology. If you thought something was X because of A (i.e. P(X|A) > P(X), maybe much greater), then your estimate of P(X|A) should change in response to the evidence "it was X, but not-X also has A", and I think the direction of that change should be obvious.
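To make the updating concrete, here's a toy Bayes calculation; the numbers are invented purely for illustration and aren't taken from anything in this thread:

P(X|A) = P(A|X)·P(X) / [P(A|X)·P(X) + P(A|~X)·P(~X)]

Say your prior is P(X) = 0.5, and you believed the cue A (e.g. the smudged maze) is common in AI output, P(A|X) = 0.8, but rare in human work, P(A|~X) = 0.1. Then P(X|A) = 0.4 / (0.4 + 0.05) ≈ 0.89. If you then learn that human-made props also show A, P(A|~X) has to go up - say to 0.5 - and the posterior falls to 0.4 / (0.4 + 0.25) ≈ 0.62. Whatever exact numbers you pick, the structure forces the update in that one direction, which is the "obvious directionality" above.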
For those who don't update like that, I should update the adjustment factor I apply to their claims of fact, and not in their favour.
(Personally I don't care about my ability to tell the difference between what's AI and what's not; I care about my ability to tell the difference between well-crafted and not, and that seems to be functioning fine)
> is followed by "oh the real props do that"
That would still be true if:
* it was indeed an AI poster
* it was an AI poster made to look that way
* it was a human-made poster accidentally made that way
* it was a human-made poster intentionally made to look that way
The truth of the show itself could have no bearing on what I was saying. The only thing it does rely on is whether or not the real-world designs correspond to the poster image.
As to the content of your post: It doesn't make sense. Thinking something wasn't human-created, and then finding out that the real reason was that it wasn't created by a human within the show, is not a valid reason to stop applying those cues as a useful discriminator between AI and human art. It's a Gettier case, but the J part of JTB knowledge still stands, and there's a reason grappling with the Gettier problem is so gnarly in epistemology.
Shark is just a shark. So smart, me!
:)
Such times never existed; you just failed to notice. Flat design took over not because it was pretty, but because it was cheap and versatile. That's why I love the furry community - you can see those people value art as a goal in itself, rather than a means to achieve other (often monetary) goals.
I will push back slightly on the idea that slop is only slop because it's bad. AI art will always be slop because of its very nature; it's not even possible to describe AI art as "good", because that's not a quality it can possess.