Seems like in 10 years AI will basically make it pointless to use a tool like this, at least for people working on average projects.
What do folks in the industry think? What’s the long term outlook?
The fact that it “seems easy” is a great flag that it probably isn’t.
Really no one can predict the future.
You are asking for industry predictions from industry professionals in an industry you know nothing about while assuming a lot about that industry.
Why do you think they should do all the heavy lifting for you?
You might as well ask ChatGPT what it thinks because it seems you already have an idea of what you want the answer to be.
AI coding agents didn't make IDEs obsolete. They just added plugins to some existing IDEs and spawned a few new ones.
It's like 2D art with more complexity and less training data. Non-AI 2D art and animation tools haven't been made irrelevant yet, and don't look like they will be soon.
What Blender and other CGI software gets for free is continuity. The 3D model does not change unless you explicitly change it.
Until we get AI which can regenerate the same model from one scene to the next, the use of AI in CGI will be severely limited.
Case in point: the key asset in the recent AI Coke ad is still utterly incoherent from shot to shot, even after they cherry-picked from ~70,000 generations and did manual touch-ups.
https://www.reddit.com/r/regularcarreviews/comments/1oubl82/...
I recently used WAN to generate a looping clip of clouds moving quickly, something that’s difficult to do in CGI and impossible to capture in live action. It worked out because I didn’t have specific demands beyond what I just said, and I wasn’t asking for anything too obscure.
At this point, I expect the quality of local video models (the only kind I’m willing to work with professionally) to go up, but prompt adherence seems like a tough nut to crack, which makes me think it may be a while before we have prosumer models that can replace what I do in Blender.
I'm very excited to see the addition of structs and closures/higher-order functions to Blender nodes! (I've also glanced at the shader compiler they're using to lower it to GLSL; neat stuff!) Not only is this going to be practically helpful, the PL researcher in me is tickled by seeing these features get added to a graphical programming language.
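For anyone who hasn't met those terms: here's a rough plain-Python analogue of what closures and higher-order functions mean. The actual feature operates on node groups and sockets rather than Python functions, so treat this purely as an illustration of the concept:

    # Rough textual analogue in plain Python, not actual Blender node code.
    def make_displace(strength):
        # 'strength' is captured by the closure, like an exposed node-group input
        def displace(height):
            return height * strength
        return displace

    def apply_to_heights(heights, modifier):
        # Higher-order function: takes another function as an argument,
        # the way a node can now accept a closure as an input.
        return [modifier(h) for h in heights]

    rocky = make_displace(2.5)
    print(apply_to_heights([0.1, 0.4, 0.9], rocky))  # ~[0.25, 1.0, 2.25]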
If you haven't heard of Blender before, or if you think AI will replace all the work done in it, fair enough. But I'd still strongly suggest looking into what it is and how it works.
Always nice to see these updates though, Blender has really come a long, long way.
KiCad was also a meh FOSS ECAD alternative 7-8 years ago; now it is by far the tool of choice for regular ECAD designs. I can see FreeCAD getting there by 2030.
It seems like it has lots of capability, but it's still at "punch your monitor" levels of difficulty just trying to do the most basic stuff.
Parasolid powers practically every major CAD system. Its development started in 1986 and it's still actively developed. The amount of effort that goes into those things is immense (39 years of commercial development!) and I don't believe it can be done pro bono in someone's spare time. What's worse, with this kind of software there is no "graceful degradation": while something like a MIP solver can be useful even if it's quite a bit slower than Gurobi, a kernel that can't model complex lofts and fillets is not particularly useful.
3D CAD is much harder than Blender and less amenable to open source development.
Fornjot has been attempting this: https://www.fornjot.app
It's going to be years or decades before it's competitive, though. Also, it looks like they switched to keeping progress updates private except to sponsors, so I don't actually have any information about it anymore, which is sad.
You're on point that there's a tremendous amount of money captured by Autodesk for CAD software that could be better directed at the open source community instead.
Software like OpenSCAD and FreeCAD is obviously not suitable for much commercial work and has very irritating limitations for hobbyist work. In my mind a big part of that is the UI, and Blender has a good, established UI at this point, so I'd love to see the open source CAD that provides an alternative to vendor lock-in come from a Blender add-on instead of a separate program.
I am no expert, but as I understand it the primary difficulty with developing good alternatives to commercial CAD software lies in the development of an effective geometric kernel.
It seems to me that if the developer of an open source CAD program builds it as a Blender add-on, they can effectively outsource the remainder of the development effort to the Blender community while focusing on the CAD kernel itself.
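To sketch the division of labor I have in mind (the operator below is hypothetical and the actual kernel call is elided), the Blender-facing side of such an add-on is mostly registration boilerplate like this, which is exactly the part the Blender community already knows how to build and maintain:

    # Minimal add-on skeleton (runs inside Blender); the operator is made up,
    # and a real version would hand geometry to the external CAD kernel.
    bl_info = {
        "name": "CAD Kernel Bridge (sketch)",
        "blender": (4, 2, 0),
        "category": "Object",
    }

    import bpy

    class OBJECT_OT_cad_fillet_sketch(bpy.types.Operator):
        """Hypothetical: fillet the active object via an external CAD kernel."""
        bl_idname = "object.cad_fillet_sketch"
        bl_label = "CAD Fillet (sketch)"
        bl_options = {'REGISTER', 'UNDO'}

        radius: bpy.props.FloatProperty(name="Radius", default=1.0, min=0.0)

        def execute(self, context):
            # This is where the add-on would call the actual CAD kernel and
            # swap the result back into the active object's mesh data.
            self.report({'INFO'}, f"Would fillet with radius {self.radius}")
            return {'FINISHED'}

    def register():
        bpy.utils.register_class(OBJECT_OT_cad_fillet_sketch)

    def unregister():
        bpy.utils.unregister_class(OBJECT_OT_cad_fillet_sketch)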
Now I want to look into it more, but I'd imagine that "Blackbody" and sky generation nodes might still assume a linear sRGB working space.
Since people are always asking for “real world examples”, I have to point out this is a great place to use an agent like Claude Code or Codex. Clone the source, have your coding assistant run its /init routine to survey the codebase and get a lay of the land, then turn “thinking” to max and ask it “Do the Blackbody attribute for volumes and the sky generation nodes still expect to be working in linear sRGB? Or do they take advantage of the new ACES 2.0 support? Analyze the codebase, give examples and cite lines of code to support your conclusions.”
The best part: I’m probably wrong to assert that linear sRGB and ACES 2.0 are some sort of binary, but that’s exactly the kind of knowledge a good coding agent will have, and it will likely fold an explanation of the proper mental model into its response.
If you make a color space for a display, the intent is that you can (eventually) get a display which can show all those colors. However, given the shape of the human color gamut, you can't choose three color primaries that form a triangle precisely matching the human color gamut. With a display color space, you want to pick primaries which live inside the gamut; otherwise you'd be wasting your display on colors that people can't see. For a working space, you want to pick primaries whose triangle contains the entire human color gamut, including some "colors" people can't see (since it can be helpful when rendering to avoid clipping).
Beyond that, ACES isn't just one color space; it's several. ACEScg, for example, uses a linear transfer function and is useful for rendering applications. A colorist would likely transform ACEScg colors into ACEScc (or something of that ilk) so that the response curves of their coloring tools are closer to what they're used to (i.e. they have a logarithmic response similar to old-fashioned analogue telecine machines).
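To make the ACEScg-vs-ACEScc distinction concrete, here's a small Python sketch of the ACEScc log encoding. The constants are from my memory of the ACES spec (S-2014-003), so double-check them against the official transform before relying on this:

    import math

    def lin_to_acescc(lin):
        # ACEScc log encoding of a linear (AP1) value; constants as I
        # remember them from the ACES spec -- verify before relying on it.
        if lin <= 0.0:
            return (math.log2(2.0 ** -16) + 9.72) / 17.52
        if lin < 2.0 ** -15:
            return (math.log2(2.0 ** -16 + lin * 0.5) + 9.72) / 17.52
        return (math.log2(lin) + 9.72) / 17.52

    # Equal ratios of scene light become equal steps in the encoded value,
    # which is why grading controls in a log space feel like adjusting stops:
    for stops in (-2, -1, 0, 1, 2):
        lin = 0.18 * (2.0 ** stops)  # 18% gray pushed up/down by whole stops
        print(f"{stops:+d} stops -> lin {lin:.4f} -> ACEScc {lin_to_acescc(lin):.4f}")

The takeaway: each doubling of scene light moves the encoded value by a fixed step (1/17.52 ≈ 0.057), which is the film-like behavior colorists expect, whereas ACEScg keeps light linear for the renderer.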