> Notice: This announcement is causing a lot of feedback. We are actively evaluating it.
Presumably a lot of Blender users work in roles that feel threatened by AI being used for computer graphics work.
Lots of negative replies on Bluesky here: https://bsky.app/profile/blender.org/post/3mkkuyq3ijs2q
This feels like the proper way to have AI act as a tool that makes artists' jobs easier without taking away their creativity?
Edit: I guess they might want absolutely no AI of any sort in their tools (which seems like a strange line to draw), or is it about the data it's been trained on?
The thing can code up fairly competent thousand-line projects in an hour. Even hardcore engineers use it now, as of this year. My senior front-end friends already can't find jobs.
You're crazy if you think things won't change dramatically, at the scale of all of society.
They are consciously trying to prevent momentum in a bad direction.
If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.
Given how much software and other AI/computer vision improvements 3D content often relies on, it's weird to decide that the algorithm itself is unallowable.
AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, either direct or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Or would you if you were of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group?
This sounds extreme until you realize they are under threat of losing their livelihood for good.
They are right not to accept your inevitability point without a fight. This is a human thing that can be fought; revolutions have happened, and will continue to happen.
I don't necessarily agree with this but I do understand it.
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
Makes me think that there's some room in the model lineup for one that doesn't do as well on benchmarks, but is trained on "ethically sourced" data (though they'd need to somehow prove that they aren't "accidentally" including other data).
Software tends to be a "living" project, so just vibe coding is not yet fully sustainable for maintaining a project. But with art, the AI just spits out a completed image.
The generated images compete directly with the people the data was sourced from, and there have also been many cases of abuse, e.g. people using AI to impersonate a popular artist and selling commissions under that artist's name.
The copyright situation for generated imagery is also tricky, so people pretending to be artists while selling work that isn't copyrightable can cause a lot of trouble and financial loss for customers.
Most of these issues don't apply to software in the same way. That's why I was surprised by the backlash to this; I don't see it as threatening artists' work.
Even myself, while I am currently extremely empowered by these tools... I could see my role (PM/builder) disappearing in the next couple years.
I respect you a lot, so if you have a moment, I would really like to get talked down from my take.
Even if you can see how individual circumstances could be beneficial to your workflow, it's a general direction I think many take issue with quite fairly.
Yikes, and I thought Twitter/X was a cesspool.
The level of entitlement, paranoia, and misdirected rage was unexpected for what I thought was supposed to be a more... sane? alternative to Musk's X.
It is a massive SDK though (thousands of functions; feel free to poke around with it, Affinity is free), and so it really shows the ability of LLMs to work effectively across long-horizon tasks and massive context windows.
Personally, I'm really interested in Blender though. I'm working on a game as a hobby/side project, and as very much a newbie I often struggle with learning and using Blender.
There are so many ways these integrations help humans and human creatives; your job and role shouldn't hinge on how skilled you are at navigating a tool, or whether you're technically savvy enough to code scripts to improve your workflow.
Turns out it is possible: one just has to have the script check whether each level of a given index entry exists, and if it does not yet exist, create it before making the next lower level by adding that sub-entry to the one above it.
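The approach described above can be sketched roughly like this (a minimal illustration only; the nested-dict index structure and the function name are assumptions, not the actual script):

```python
# Hypothetical sketch: before adding a sub-entry, walk the index
# top-down and create any level that does not exist yet, so each
# sub-entry is attached to the level above it.

def add_index_entry(index, levels):
    """Ensure every level in `levels` exists, creating missing ones
    top-down, and return the dict for the deepest level."""
    node = index
    for level in levels:
        # Create this level if it is missing, then descend into it.
        node = node.setdefault(level, {})
    return node

index = {}
add_index_entry(index, ["Chapter 1", "Section A", "Topic i"])
add_index_entry(index, ["Chapter 1", "Section B"])
# index now holds both branches under "Chapter 1".
```

`dict.setdefault` does the check-then-create step in one call, which keeps the top-down walk short.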
An LLM is only going to code what it has documented as possible/working and may not be able to do what needs to be done.
I've worked with Claude in many creative capacities, and its issue is that, despite being able to see, if you ask it to draw something (using ASCII, for example) it will fail; if you ask it to iterate on that drawing, it will continue to fail without getting any closer to the target, and then complain about this.
I've felt that these models struggle with anything that cannot be decomposed into primitives; their architecture is too greedy and favours the obvious, and autoregressive generation converges to the modal answer. So unless they have enhanced the models in some creative sense, I fail to see how this is anything other than giving Claude a bunch of documentation/MCP servers/APIs/CLI tools (which already existed) and making an announcement out of it.
My point: FREE the models, unchain them, and let's see what they are actually capable of. Also, put some damn demos in the announcement post?