Seems like in 10 years AI will basically make it pointless to use a tool like this, at least for people working on average projects.
What do folks in the industry think? What’s the long term outlook?
The fact that it “seems easy” is a great flag that it probably isn’t.
Really no one can predict the future.
You are asking for industry predictions from industry professionals in an industry you know nothing about while assuming a lot about that industry.
Why do you think they should do all the heavy lifting for you?
You might as well ask ChatGPT what it thinks because it seems you already have an idea of what you want the answer to be.
AI coding agents didn't make IDEs obsolete. They just added plugins to some existing IDEs and spawned a few new ones.
It's like 2D art with more complexity and less training data. Non-AI 2D art and animation tools haven't been made irrelevant yet, and don't look like they will be soon.
What blender and other CGI software gets for free is continuity. The 3D model does not change without explicitly making it change.
Until we get AI which can regenerate the same model from one scene to the next, the use of AI in CGI will be severely limited.
Case in point, the recent AI Coke ad only needed a single key asset to remain consistent from shot to shot and it's still a complete mess.
https://www.reddit.com/r/regularcarreviews/comments/1oubl82/...
Not for lack of trying either, apparently they burned through 70,000 generated clips and manually composited the Coke logo just to get those results.
I recently used WAN to generate a looping clip of clouds moving quickly, something that's difficult to do in CGI and impossible to capture in live action. It worked out because I didn't have specific demands other than what I just said, and I wasn't asking for anything too obscure.
At this point, I expect the quality of local video models (the only kind I’m willing to work with professionally) to go up, but prompt adherence seems like a tough nut to crack, which makes me think it may be a while before we have prosumer models that can replace what I do in Blender.
I'm very excited to see the addition of structs and closures/higher-order functions to blender nodes! (I've also glanced at the shader compiler they're using to lower it to GLSL; neat stuff!) Not only is this practically going to be helpful, the PL researcher in me is tickled by seeing these features get added to a graphical programming language.
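For anyone wondering what "closures and higher-order functions" even mean in a node graph: the rough idea (sketched here in plain Python with made-up names, not Blender's actual API) is that a subgraph becomes a value you can plug into another node's input socket, capturing parameters from the enclosing graph.

```python
# Textual analogue of closure/"function" sockets in a node graph:
# a subgraph can be passed as data into another node (think of a
# hypothetical "for-each" zone), capturing values from its
# surrounding graph -- i.e., a closure.

def make_offset(dz):
    # 'dz' is captured from the enclosing scope, like a group input
    # baked into the closure where it's created.
    def offset(point):
        x, y, z = point
        return (x, y, z + dz)
    return offset

def for_each_point(points, fn):
    # Higher-order "node": takes a function as an input socket.
    return [fn(p) for p in points]

raised = for_each_point([(0, 0, 0), (1, 2, 3)], make_offset(1.0))
# raised == [(0, 0, 1.0), (1, 2, 4.0)]
```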
If you haven't heard of Blender before, or if you think AI will replace all the work done in it, fair enough. But I'd still strongly suggest looking into what it is and how it works.
Always nice to see these updates though, Blender has really come a long long way.
Many people complain about it being a mesh editor but it works for me. The sheer variety of tooling and flexibility in Blender is insane, and that's before you get to the world of add-ons.
I want to learn geometry nodes and object generation, as I think they will address a lot of the "parametric" crowd's concerns. Version 5 is meant to be a big step forward in ease of use there.
Also, I'm not sure if it's the different tooling that lets me see all the flaws in online "parametric" models, but they get frustrating. I have Gordon-Ramsay-screamed, "How can you fuck up a circle!"
KiCad was also a meh FOSS ECAD alternative 7-8 years ago; now it is by far the tool of choice for regular ECAD designs. I can see FreeCAD getting there by 2030.
It seems like it has lots of capability, but still has "punch your monitor" levels of difficulty just trying to do the most basic stuff.
Deltahedra is a great YouTube channel for getting the basics.
Parasolid is powering practically every major CAD system. Its development started in 1986 and it's still actively developed. The amount of effort that goes into those things is immense (39 years of commercial development!) and I don't believe it can be done pro-bono in someone's spare time. What's worse, with this kind of software there is no "graceful degradation": while something like a MIP solver can be useful even if it's quite a bit slower than Gurobi, a kernel that can't model complex lofts and fillets is not particularly useful.
3D CAD is much harder than Blender and less amenable to open source development.
Fornjot has been attempting this: https://www.fornjot.app
It's going to be years or decades before it's competitive though. Also, it looks like they switched to keeping progress updates private except to sponsors, which means I don't actually have any easily-accessible information about it anymore which is sad.
The tricky bit is having a G2 (or even G3) fillet that intersects a complex shape built from surface patches and thickened, with both projected into a new sketch, and keeping the workflow sane if I go and adjust the original fillet. I hope one day we'll see a free (as in speech) kernel that can enable that, until then it's just Parasolid, sadly.
You're on point that there's a tremendous amount of money captured by Autodesk for CAD software that could be better directed at the open source community instead.
Software like OpenSCAD and FreeCAD is obviously not suitable for much commercial work, and has very irritating limitations even for hobbyist work. In my mind a big part of that is the UI. Blender has a good, established UI at this point, so I'd love to see the open source CAD alternative to vendor lock-in come from a Blender add-on rather than a separate program.
I am no expert, but as I understand it the primary difficulty with developing good alternatives to commercial CAD software lies in the development of an effective geometric kernel.
It seems to me that if a developer of an open-source CAD program built it as a Blender add-on, they could effectively outsource the rest of the development effort to the Blender community and focus on the CAD kernel itself.
It must be difficult when so much management is short sighted and focused on delivering short term profits for shareholders. Even academia is run like a business now.
Unless a privately held rogue company like Valve gets interested, it's probably going to have to wait for a government/NGO/scientific effort. Industry, particularly the tech industry, is notorious for leeching off free and open source software, in some cases building entire businesses on it without giving back.
1220512064•41m ago
Now I want to look into it more, but I'd imagine that "Blackbody" and sky generation nodes might still assume a linear sRGB working space.
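For the record, "linear sRGB" here means the sRGB primaries and white point with the gamma curve removed. A minimal sketch of the transfer function involved (the standard IEC 61966-2-1 piecewise curve; nothing Blender-specific):

```python
def srgb_to_linear(c: float) -> float:
    """Invert the sRGB transfer function (IEC 61966-2-1).

    A 'linear sRGB' working space keeps the sRGB primaries and
    white point but stores light-linear values like these."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Forward sRGB encoding, for sending values back to a display."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

# Middle gray on an sRGB display (~0.5 encoded) is only ~21% linear light:
mid = srgb_to_linear(0.5)   # ~0.214
```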
Uehreka•32m ago
Since people are always asking for “real world examples”, I have to point out this is a great place to use an agent like Claude Code or Codex. Clone the source, have your coding assistant run its /init routine to survey the codebase and get a lay of the land, then turn “thinking” to max and ask it “Do the Blackbody attribute for volumes and the sky generation nodes still expect to be working in linear sRGB? Or do they take advantage of the new ACES 2.0 support? Analyze the codebase, give examples and cite lines of code to support your conclusions.”
The best part: I’m probably wrong to assert that linear sRGB and ACES 2.0 are some sort of binary, but that’s exactly the kind of knowledge a good coding agent will have, and it will likely fold an explanation of the proper mental model into its response.
1220512064•30m ago
If you make a color space for a display, the intent is that you can (eventually) get a display which can display all those colors. However, given the shape of the human color gamut, you can't choose three color primaries which form a triangle which precisely contain the human color gamut. With a display color space, you want to pick primaries which live inside the gamut; else you'd be wasting your display on colors that people can't see. For a working space, you want to pick primaries which contain the entire human color gamut, including some colors people can't see (since it can be helpful when rendering to avoid clipping).
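You can check this numerically. Using the standard BT.709/sRGB primaries and an approximate chromaticity for spectral green near 520 nm (the exact locus value depends on the observer data, so treat it as illustrative), a simple point-in-triangle test shows a visible color falling outside the display gamut:

```python
def sign(p, a, b):
    # Signed-area test: which side of edge a->b the point p falls on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_triangle(p, a, b, c):
    s1, s2, s3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

# sRGB/BT.709 primaries in CIE xy chromaticity coordinates
R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)

d65 = (0.3127, 0.3290)           # white point: inside the gamut
green_520nm = (0.074, 0.834)     # approx. spectral green: visible to
                                 # humans, but outside the sRGB triangle

inside_triangle(d65, R, G, B)          # True
inside_triangle(green_520nm, R, G, B)  # False
```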
Beyond that, ACES isn't just one color space; it's several. ACEScg, for example, uses a linear transfer function, and is useful for rendering applications. A colorist would likely transform ACEScg colors into ACEScc (or something of that ilk) so that the response curves of their coloring tools are closer to what they're used to (i.e. they have a logarithmic response similar to old-fashioned analogue telecine machines).
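To illustrate the linear-vs-log point: in a log encoding, doubling the light (one stop of exposure) always moves the encoded value by the same amount, which is why grading controls feel uniform on log material. A sketch using the ACEScc-style curve (constants as published in the ACES S-2014-003 spec, with the piecewise near-black segment omitted for brevity, so only valid for values >= 2**-15):

```python
import math

def lin_to_log(lin: float) -> float:
    """ACEScc-style log encoding (constants from ACES S-2014-003).

    The piecewise near-black branch of the real curve is omitted
    here; this sketch assumes lin >= 2**-15."""
    return (math.log2(lin) + 9.72) / 17.52

# One stop up (doubling the light) is always the same encoded step:
step_a = lin_to_log(0.36) - lin_to_log(0.18)  # 1/17.52 ~ 0.057
step_b = lin_to_log(0.72) - lin_to_log(0.36)  # same step
```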