I have to wonder whether movies will actually improve with AI. Some really stupid franchises have made stupid amounts of money, while most people barely watch the genuinely good, creative stuff. We're already swamped with unwatchable schlock, and I'm not sure automating it will help. It's still the same people spending the money to make, promote, and distribute movies; the AI has neither the money nor the impetus to make one. But if most people cared about art, creativity, and good storytelling, there probably wouldn't be a race to the bottom in the entertainment industry.
Idiocracy was a documentary, and "ASS" https://www.youtube.com/shorts/kJZjU2k5abs is what the AI will calculate we want to see, and it will win awards.
If you're talking about the kind of movies with big-budget explosions and violence, then no thanks. That isn't what I'm talking about at all. Sure, AI will make that schlock cheaper. A lot of the "indie" stuff is garbage, too.
I feel Hollywood might be the same way.
No comment needed.
According to that Wikipedia page, MASSIVE was used for Avengers: Endgame, so it's had about a 20-year run at this point.
If this AI worked without training, no one would say anything.
- LLMs were trained on copy-protected content, devaluing the input a worker puts into creating original work
- LLMs are a tool for generating statistical variations and refinements of work; this doesn't devalue the input but makes generating output easier
Form vs. function issues. It would be preferable to give people a legal pathway to keep making money and owning their work, instead of allowing it to be vacuumed up by corporations looking to automate them away. The functional issue still exists, but it no longer puts your personal work at risk of theft or abuse outside its economic intent. Then the social stigma stops mattering, because "an LLM is just a tool" becomes a solid argument that doesn't cause abuse or erode existing legal protections.
And on the other hand, well, I don't really care if someone loses their job when AI can do the same job... Our whole way of living is built on efficiency. Artists being replaced by machines is no different from farmhands being replaced by combine harvesters.
I believe quite a lot of people would agree on both of those, too.
I don’t believe that for one second.
People are rightfully scared of professional and economic disruption. "OMG, training" is just a convenient bit of rhetoric to establish the moral high ground. If and when AIs appear that are trained entirely on public domain and synthetic data, there will be some other moral argument.
Same goes for music. If you need AI and autotune, find another way to earn a living.
But I'll bite. MASSIVE is a crowd-simulation solution; the assets that go into the sim are still artist-created. Even in 2003, people were already used to this sort of division of labor. What the new AI tools do is shift the boundary between artists providing input parameters and assets and the computer doing what it's good at, massively and as a big step change. It's the magnitude of that step change causing the upset.
But there's also another reason artists are upset, which I think is the one most tech people don't really understand. Of course industrial-scale art leans on priors (sample and texture banks, stock images, etc.), but by and large operations still take a sort of pride in re-doing things from scratch for a given production where possible, rather than re-using existing elements; it's also understood that with so many variables, the work will come out a little different and add unique flavor to the end product. Artists see generative AI as regurgitation machines, interrupting that ethic of "this was custom-made anew for this work".
This is not an idea that software engineers typically share. We are comfortable with, and even advised toward, re-using existing code as-is. At most we consider "I rewrote this myself even though I didn't need to" a valuable learning exercise, not good professional practice (cf. the ridicule for NIH syndrome).
This is one of the largest differences between the engineering method and the artist's method. If an artist says "we went out there and recorded all this foley again by hand ourselves for this movie", it's considered better art for it. If a programmer says "I rolled my own crypto for my password manager SaaS", they're showing incredibly poor judgment.
It's a little like trying to convince someone that a lab-grown gemstone is identical to a mined one, even at the molecular level: yes, but the particular atoms, functionally identical or not, have a different history to them. To some that matters, and to artists the particulars of the act of creation matter a lot.
I don't think the genie can be put back in the bottle, and most likely we'll all just get used to things, but I think capturing this moment and what it did to communities and trades, purely as a form of historical record, is somehow valuable. I hope the future history books do the artists' lament justice, because there is certainly something happening to the human condition here.
I agree the magnitude of the step change is upsetting, though.
But yeah, the tension between placing a value on doing things just in time vs. reducing the labor by using tools or assets has surely always been there in commercial art.
There's plenty of reuse culture in movies and entertainment too - the Wilhelm scream, sampling in music - but it's all very carefully licensed and the financial patterns for that are well understood.
I wonder who is making the OSS version of these tools, so you can specify all the hundreds of parts needed to compose a decent framework?
The heavy emphasis is on making cutting-edge models work with limited local compute.
For example, I wanted help setting up a LoRA and batch iteration. The LLM can figure out where you've hooked things up incorrectly. The UI is funky, and the terms and blocks require a familiarity you won't have at the start.
I think learning the basics this way would be useful, because you'll get a positive feedback loop going before trying to make use of someone's shared, complex workflow.
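For the curious, here's a minimal sketch of what those LoRA and batch-iteration blocks boil down to in plain code. It assumes the Hugging Face `diffusers` library; the checkpoint ID and LoRA path are placeholders, not recommendations:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base checkpoint; fp16 plus CPU offload stretches limited local VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # or any local checkpoint
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

# Hypothetical LoRA file; swap in your own fine-tune.
pipe.load_lora_weights("./loras/my_style.safetensors")

prompt = "storyboard panel, wide establishing shot, pencil sketch"
for batch in range(3):  # batch iteration: several takes per prompt
    result = pipe(
        prompt,
        num_images_per_prompt=4,
        generator=torch.Generator("cpu").manual_seed(batch),  # reproducible takes
    )
    for i, image in enumerate(result.images):
        image.save(f"take_{batch}_{i:02d}.png")
```

The node-based UIs are essentially wiring diagrams over calls like these, which is also part of why an LLM can often spot where a graph is hooked up wrong.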
Hollywood might save money in the short run, but it's doomed to irrelevance in the long run, because you'll have access to the exact same tools it does.
Is it good or bad? I don't know, it just is...
It's bad. Look at what social media and cellphones have done to society and human attention spans.
A lot of bad shit will come out of this that won't truly be appreciated until it's already too late to reverse course.
>One animator who asked to remain anonymous described a costume designer generating concept images with AI, then hiring an illustrator to redraw them — cleaning the fingerprints, so to speak. “They’ll functionally launder the AI-generated content through an artist,” the animator said.
This seems obvious to me.
I’ve drawn birthday cards for kids where I first use gen AI to establish concepts based on the person’s interests and age.
I’ll get several takes quickly, but my reproduction is still an original, appreciated work.
If the source of the idea cheapens the work I put into it with pencils and time, I’m not sure what to say.
> “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”
This sounds eerily similar to the messaging around SWE.
I do not see a way past this: one must rise past prompting and into orchestration.
I don't know how accurate that is.
... and produced the worst AI-upscales of True Lies and Aliens, to universal scorn from audiences.
When it comes to artists, I have less insight, but what I see is that they're extremely critical of it and don't like it at all.
It's interesting to see that gap in reactions to AI between artists and tech companies.
We don’t want any of this and are working to build around it.
It’s being really pushed by a lot of the same people who were pushing Web3 and NFTs and blockchain grifts.
Using AI for art is an idiotic proposition for me. If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself. If you don’t enjoy perfecting the sentence, maybe don’t be a writer?
That’s why there’s a disconnect. I make art for personal fulfillment and the joy of the creative act. To offload that to something that helps me do it “faster” has exactly zero appeal.
The same would be true if I were going to use AI to read it. If we just wanted to trade CliffsNotes around, why bother with novels at all?
Cyber-Leo-Tolstoy types a three-page summary of "War and Peace" into ChatGPT and tells it to generate an 800-page novel. Millions of TikTok-addled students ask ChatGPT to summarize the 800-page novel into three pages (or a five-paragraph essay). What is the point of any of this?
Anyways, AI-generated media is gonna lead to hyper-personalized, on-demand media for people to consume. Sure, Hollywood will still be around, but once consumer computing power and the models catch up, there are gonna be a ton of people choosing their own worlds over the ones curated by an industry.
The only way out of this will be HN types who roll their own, and those will probably suck in comparison to the commercial systems filled with product placement and mindblowing amounts of information harvesting.
Takeaway: Maybe AI good. Maybe AI bad. Scary. But possibility. Everybody try.