Though I don't like the Wayland vs. X11 flamewar, I'm happy to see some modern features are only supported on Wayland.
That may please the crowd that will now be able to say "sorry, I can't use X11 because it doesn't support a feature I need," bringing some symmetry to the discussion.
Edit: correction: it is about the development 5.0 version: https://devtalk.blender.org/t/vulkan-wayland-hdr-support/412...
Also, getting the most out of Intel+RTX CUDA card render machines sometimes means booting into Windows for proper driver support. Sad but true... =3
Making a feature specific to a platform used by a negligible fraction of users is inefficient, as many applications will never be ported to Linux.
Blender should be looking at its add-on ecosystem data and evaluating where users find value. It is a very versatile program, but it should probably focus on media and game workflow interfaces rather than gimmicks.
Best of luck =3
If nothing else, it's better to have some implementation to reference for future platforms than none.
https://www.youtube.com/watch?v=WFZUB1eJb34
Someone needs to write a Blender color calibration solution next vacation =3
Creatives on Wacom tablets and Adobe products etc. will exclude the Linux desktop option. =3
I like Linux (I use it every day), but many CAD, media, animation, and mocap application vendors simply don't publish ports outside Windows.
Most studios have proprietary workflows with custom software. =3
> Making a feature platform specific to a negligible fraction of the users is inefficient, as many applications will never be ported to Linux platforms.
All the large studios use Linux; that's why all the third-party software used in feature animation and VFX is supported on Linux. So I'm just saying 'negligible fraction of users', in the case of Blender (which as a project would like to increase adoption in professional feature animation and VFX), isn't really true.
Stats are available from the published 2024 feedback data:
https://survey.blender.org/feedback/2024/
Best of luck, =3
Recommended reading:
https://www.poetry.com/poem/101535/the-blind-men-and-the-ele...
Why?
If anything, that's a reason why I wouldn't fully jump to Blender.
I have been working on my own hobby game engine for the past 15 years and have been excited to introduce Blender to the workflow. If this is the case, I don't like it. Wayland has never worked for me the way X has.
On a serious note, I do wonder if this Wayland only limitation is something a fork could work around.
At some point I see an LLM more or less integrated into the UI.
At some point I see whole apps written so complexly that an LLM is the required interface.
I have not tried it. Instead I have been asking Claude (etc.) "How do I create a repeating triangular truss..." or what-have-you. And then I follow the steps they list.
Could check out https://extensions.blender.org/add-ons/mpfb/ for low-poly character mesh generation tools. =3
If you are pushing vertices around to make a model, you eventually have to think of topology issues if you ever want to make a good animation.
And then when you have good topology, you need to rework the UV maps, normals, and other such data so that it's all consistent.
LLMs are not the right tool for this. I'm sure some AI retopology tool can come around but there is a HUGE difference between a model designed for stills, a model designed for animation and finally, a model designed for close up facial animations.
The topology differences alone are insane.
---------
Now maybe in the future an Ubermesh (like the Uber shader / principled shader) can be made to make these processes consistent. And then tooling can be made over the hypothetical Ubermesh. But today it's a lot of manual retopology that requires deep thinking about animation details.
Ex: an anime-style mouth doesn't usually move the jawline (think Sonic's mouth: it's basically a moving hole that jumps around his face). Meanwhile, western 3D animation (Baldur's Gate and the like) obviously has fully made lip sync, including the phhfff, eeee, and ahhh sounds.
The kind of animation you want leads to different mesh topologies. It's just the nature of this work.
And people get ANGRY when you, for example, animate Sonic with traditional teeth and a jawline (see 'Ugly Sonic'). You can't just pick one style willy-nilly. You need purposeful decisions.
--------
I guess if you are a pure sculptor doing stills, then you don't have to worry about this. But many 3D projects are animated, if not most of them. The retopology steps are well known.
A low-poly look will almost certainly require manual effort (these topology problems tend to disappear with more triangles).
People have feelings, and sometimes may misinterpret others' intent. I am happy we've met if we both learned something. =3
Waaaaay back in the Quake 3 days I had a pirated copy of 3D Studio Max and decided to try and make a Quake (or maybe it was Half-Life) character model. I found an online tutorial that showed you, step by step, how to set up Max with a background image that you "traced" over. So I grabbed front, back, and side views of a human, loaded them into the various viewports, then drew a rectangle from which you extrude a head, arms, and legs. Then you manipulate the block-headed human mesh to fit the human background image, extruding more detail from the mesh as you go along. In one day I had a VERY crude model of a person. I also found out I don't like 3D modelling. Though I'd say a person who really enjoys it would pick it up in a week or two with good tutorials.
LLMs just cut out the learning part, leaving you helpless to create anything beyond prompts. I am still very mixed on LLMs, but due to greed and profit-driven momentum, they will eat the world.
Even just a coffee a month will help immensely. Please consider supporting :)
Supporting open source creates competition for your paid alternative, so they're forced to give you a better product or a better deal.
It's easily one of the most well-made FOSS projects out there!
You get access to training, assets, source and production logs like the recent Dog Walk update while also supporting Blender:
https://studio.blender.org/projects/dogwalk/production-logs/
I know the industry standard for animation is still Maya, and geometry nodes aren't as powerful as Houdini, but Blender has made such great progress so far... I wonder if there are any hobbyists or students who learn 3ds Max or 3DCoat as their first DCC anymore?
*: Assuming genAI won't deliver and make that world disappear entirely, ofc.
Blender has always had the perpetual Beta problem, as many boring core design issues were never really given priority. Fine for developers, but a liability in commercial settings.
Houdini is interesting, but with Blender Geometry-nodes now working it is unclear how another proprietary ecosystem will improve workflows. =3
The Entagma channel covers a lot of Houdini and Blender bleeding-edge features with short lab tutorials.
Geometry nodes are quite a separate thing from what Blender already has. Last time I checked, one couldn't even create vertex groups in geometry nodes, and there was no way to create an armature there either. (Not sure if that has changed since I checked, though.)
The mini labs on Entagma's channel were very helpful for parametric surface operations =3
I found the Entagma lab videos very helpful in understanding how to parse geometry with nodes. It is not obvious (unless you already get CLI pipes), but so worth it when things finally work... =3
This can be solved with paid contributors, but FOSS organizations don't have the most funding out there, so it can be challenging to attract specialized talent.
The other problem is people cowing anyone who doesn't agree with their personal opinions on how projects should mature. QED: our karma scores... lol =3
These plugins made Blender usable for a few projects, and I have personally found them valuable for asset creation:
https://tinynocky.gumroad.com/l/tinyeye
https://sanctus.gumroad.com/l/SLibrary
https://polyhaven.com/plugins/blender
https://artell.gumroad.com/l/auto-rig-pro
https://fbra.gumroad.com/l/Faceit (for Unreal face mocap app with iPhone Pro Lidar)
I usually recommend these courses to users when the under-$20 sale is active.
They cover a lot of Blender's non-intuitive workflows :3
"Complete Blender Creator: Learn 3D Modelling for Beginners"
https://www.udemy.com/course/blendertutorial/
* Basics of low-poly design in Blender
"Blender Animation & Rigging: Bring Your Creations To Life"
https://www.udemy.com/course/blender-animation-rigging/
* Some more practice rigging
"The Ultimate Blender 3D Sculpting Course"
https://www.udemy.com/course/blender-3d-sculpting-course/
* Sculpting, Retopology, and VDM brushes
* Basic anatomy
"The Ultimate Blender 3D Simulations, Physics & Particles"
https://www.udemy.com/course/blender-simulations-physics-par...
* Shader/Texture basics
* Geometry node basics
* Boid sprites
* Hair and physics simulation
* Camera FX, and post-render filters
* Instructions on how to export your assets to Unity 3D and Unreal game engines
Closed-source software seems to get... stuck, in the best case. Often it regresses, becoming buggier from version to version with fewer features. I think of Windows and the entire Microsoft suite of applications.
I think one exception is GNOME. GNOME loves removing features even more than they love not implementing popular Wayland protocols.
E.g., of the $3k of community Blender plugins/add-ons we evaluated last year, only around 60% are still functional in stable Blender releases. Additionally, many built-in core features like Fracture broke in 4.x due to API changes and being split into their own modules.
In a production environment one must version lock Blender for the project. =3
At one time there was a Houdini-Engine Open Mesh Effect plugin, but no idea if that project survived.
Cheers =3
I get the concern and the reasoning behind it, but having spent most of my adult life and career in digital content creation tooling (2D, 3D, and video), I believe AI will be short-term disruptive but a long-term net positive. It should make skilled pros faster and more productive while helping non-pros achieve more of their creative vision on their own.
For over 30 years progress in tools has been about giving creative people the power to realize their visions. To create more and do it faster, cheaper and at higher quality. Of course, they don't always choose to use that power well but the concern that "these new tools are just enabling bottom feeders to create more bad content" has remained a pretty constant refrain since I started in 1989. Even back in the 90s I said "Sure, these new tools will enable 95% more crap but they'll also unleash 5% that's great which wouldn't have existed before." I think that's just as true today as it's always been.
But I do wonder about smaller teams accomplishing more, where the unit of work-doing people can be much smaller. The org tree can shrink, probably knocking out some levels. Maybe a team of 4 that does the work previously done by 10 finds it easier to just have someone directly interface with the customer, rather than needing a layer of project managers and customer service to receive customers' messages and distribute them.
I’d be worried if I was… anywhere above an IC really.
Interestingly, I heard similar ratios were being discussed (or feared/lamented) in the mid-90s as computer-based non-linear video editing swept the post-production industry and in live TV production as computer-based all-in-one production switcher, effects, titler systems like the Amiga-based Video Toaster disrupted everything. I still remember hearing about how the Toaster was giving unionized TV stations problems because union rules said the switcher guy wasn't allowed to touch the graphics/titling system or the editing system. So three guys would be standing around one chair depending which tab was up on the Video Toaster screen :-).
Having been around the content creation tooling business for so long has given me perspective. Since at least the late 80s it's been one never-ending disruption. And, despite the constant predictions of jobs lost, today there are far more jobs in content creation than there were then. Of course, the job descriptions, types, skills required, divisions of labor, and which roles were more or less valuable have never stopped changing and probably won't. Yes, this ends up displacing some people in some roles. Back in the 90s, highly paid senior video engineers with deep expertise in wiring up and synchronizing analog video systems initially laughed at us "young kids" with our "toy" digital video systems. Then they resented needing to call us in to get computer-based gear installed, interfaced, and working. Then they resented that we "new kids" were paid less due to being less senior yet were becoming increasingly essential. Some of them learned the newfangled digital systems while others didn't and opted to retire or do something else.
Over thirty years later, I'm now the highly-compensated "old guy" with once-invaluable expertise that's depreciating by the minute and proven skills at doing increasingly irrelevant things. The particulars change but the theme remains the same, which is why I don't think the shift to integrating AI in content creation will be significantly different. There will be skills that become less valuable and job types that go away and entire industry sectors which get disintermediated but at the same time new kinds of work, industry sectors and business opportunities will emerge in different places and ways. As always, the new job types won't be 1:1 replacements for the old, and this will cause as much angst as it always has. They'll not only look different and have different kinds of trade-offs, they may even have different business and compensation models.
It's Schumpeter's process of "Creative Destruction" and renewal (https://en.wikipedia.org/wiki/Creative_destruction). The destruction part is fast, loud and shocking as what exists today combusts in flame. The creative renewal part tends to be more gradual, unfamiliar and at first may not even seem related. From time to time, even the definition of "the industry" ends up changing along with the jobs and roles. I'm pretty sure the old-school TV station engineers I learned from wouldn't consider a digital nomad working full-time editing 30 second clips on a mobile device for YouTube, Instagram and TikTok influencers as even being "in the TV business."
The problem is, AI is good enough to replace juniors. That means companies are already cutting positions at that level and some are just itching to ditch intermediates as well once quality improves.
But when the juniors and intermediates are all gone... how do the beancounters expect to get new seniors when the old ones retire or hand in their two weeks' notice because they are fed up with cleaning up after crap AI?
I just wrote a longer reply to a sibling comment addressing exactly this: https://news.ycombinator.com/item?id=44574895.
Who needs companies? Those sound like an unnecessary middleman to me.
The current state of things leaves a lot to be desired. I have yet to see a model or technique that actually tackles real 3D content creation problems, aside from animation of course.
Most solutions are trying to solve things completely orthogonal to what you really need in the real world. They produce bad meshes, if meshes at all, have bad topology or inconsistent point sampling, and are extremely prone to parroting their training data.
It's kind of incredible how far we have come, but seriously, to understand a problem space you need to try to use the existing tools. Only then do new, useful ideas pop up that solve real issues. But man, nobody in the space seems to have a clue.
Believe me, 3D modelling software is REALLY good at what it does; it is really hard to dethrone a human and machine working together in these well-thought-out pieces of software.
Even if genAI were to kill the cinema/animation back-end industry, I still think 3D software would be needed for videogame development, and Godot is far behind Blender in modelling/animation capabilities. Hopefully they can latch onto this in the long term if it turns out AI keeps eating artists' lunch.
Tucked away at the end is support for defining custom cameras through OSL, which is a very, very interesting feature, because it allows more physically accurate lens emulation, with dramatically more interesting bokeh than the standard perspective camera model: https://www.reddit.com/r/blender/comments/1kehtse/new_camera...
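For intuition, the thin-lens model behind this kind of depth-of-field emulation can be sketched in a few lines of Python. This is a generic illustration of the math, not Blender's actual OSL camera API; the function name and camera-space conventions here are my own:

```python
import math
import random

def thin_lens_ray(px, py, focal_distance, lens_radius, rng=random):
    """Generate one camera ray for a thin-lens model.

    (px, py) is the point on the image plane in normalized coordinates,
    focal_distance is the distance to the plane of perfect focus, and
    lens_radius controls depth of field (0 = ideal pinhole).
    Returns (origin, direction) in camera space, looking down -z.
    """
    # The point on the focus plane that a pinhole ray through (px, py)
    # would hit; every lens sample aims at this same point, which is
    # why geometry on the focus plane stays sharp.
    focus = (px * focal_distance, py * focal_distance, -focal_distance)

    # Uniformly sample a point on the lens disk.
    r = lens_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    ox, oy = r * math.cos(theta), r * math.sin(theta)

    # Ray from the lens sample toward the in-focus point.
    dx, dy, dz = focus[0] - ox, focus[1] - oy, focus[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (ox, oy, 0.0), (dx / norm, dy / norm, dz / norm)
```

With `lens_radius = 0` this collapses to a pinhole camera; larger radii blur everything off the focus plane, and swapping the uniform disk sample for a polygonal or annular aperture is exactly the kind of change that produces the more interesting bokeh a custom camera shader allows.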
https://developer.blender.org/docs/release_notes/4.5/