Seems like in 10 years AI will basically make it pointless to use a tool like this at least for people working on average projects.
What do folks in the industry think? What’s the long term outlook?
The fact that it “seems easy” is a great flag that it probably isn’t.
Really no one can predict the future.
You are asking for industry predictions from industry professionals in an industry you know nothing about while assuming a lot about that industry.
Why do you think they should do all the heavy lifting for you?
You might as well ask ChatGPT what it thinks because it seems you already have an idea of what you want the answer to be.
AI coding agents didn't make IDEs obsolete. They just added plugins to some existing IDEs and spawned a few new ones.
It's like 2D art with more complexity and less training data. Non-AI 2D art and animation tools haven't been made irrelevant yet, and don't look like they will be soon.
What Blender and other CGI software get for free is continuity. The 3D model does not change unless you explicitly change it.
Until we get AI which can regenerate the same model from one scene to the next, the use of AI in CGI will be severely limited.
Recent news of major AI scientists starting "world AI" companies confirms this trend.
So 3D will soon become an even more important technology than it is today.
I recently used WAN to generate a looping clip of clouds moving quickly, something that’s difficult to do in CGI and impossible to capture in live action. It worked out because I didn’t have specific demands other than what I just said, and I wasn’t asking for anything too obscure.
At this point, I expect the quality of local video models (the only kind I’m willing to work with professionally) to go up, but prompt adherence seems like a tough nut to crack, which makes me think it may be a while before we have prosumer models that can replace what I do in Blender.
A lot of the editing functions for 3D art play some role in achieving verisimilitude in the result - that it looks and feels believably like some source reference, in terms of shapes, materials, lights, motion and so on. For the parts of that where what you really want to say is "just configure A to be more like B", prompting and generative approaches can add a lot of value. It will be a great boost to new CG users and allow one person to feel confident in taking on more steps in the pipeline. Every 3D package today resembles an astronaut control panel because there is too much to configure and the actual productions tend to divvy up the work into specialty roles where it can become someone's job to know the way to handle a particular step.
However, the actual underlying pipeline can't be shortcut: the consistency built by traditional CG algorithms is the source of the value within CG, and still needs human attention to be directed towards some purpose. So we end up in equilibriums where the budget for a production can still go towards crafting an expensive new look, but the work itself is more targeted - decorating the interior instead of architecting the whole house.
I'm very excited to see the addition of structs and closures/higher-order functions to blender nodes! (I've also glanced at the shader compiler they're using to lower it to GLSL; neat stuff!) Not only is this practically going to be helpful, the PL researcher in me is tickled by seeing these features get added to a graphical programming language.
If you haven't heard of Blender before, or if you think AI will replace all the work done in it, fair enough. But I'd still strongly suggest looking into what it is and how it works.
Always nice to see these updates though, Blender has really come a long long way.
Inkscape is good for typing dimensions into rectangles tho
I’ll check out Inkscape as well. I’ve tried using some raster graphics tools in the past, but I couldn’t type dimensions and had to use the rulers and guides with snapping. It mostly worked, but was a bit annoying.
What I'd do is:
- Spreadsheet workbench --> Create spreadsheet (name it "measurements"). (This is optional)
- Switch to Part design workbench --> Create body (name it "layout") --> select XY plane --> Create sketch --> Create Polyline
- Zoom out, start drawing the rooms in your house, approximately to scale.
- Before going into too much detail, add a dimension (select line --> "Constrain Distance") to the first line you draw, so that you can do the rest of your drawing approximately to scale. Then the general shape won't get messed up when you add dimensions to everything else.
- (If you have a photo or picture, you can import that to sketch over).
- Add constraints to match your room measurements, mostly vertical or horizontal distance constraints. Be careful not to overconstrain the sketch. (You can put the measurements directly into the sketch constraints, or you can put them into the top-level spreadsheet, create an alias for each cell, and then set the dimensions to reference those cells).
- Once the rooms are drawn, close the sketch and create a new sketch on the XY plane called "furniture".
- Draw some rectangles for your sofas / tables / etc, delete any horizontal and vertical constraints that get automatically added (they look like little | and _ icons), and instead apply perpendicularity constraints. Dimension your rectangles using only the "constrain distance" tool. Now you can drag them around the room and rotate them freely.
- If you want to make 3D models for these too, create new Part Design bodies for each room and each piece of furniture, create a shape binder referencing the master sketches in the Layout body, and then extrude the sketches using the "Pad" operation.
That's about as much tutorial as it makes sense to pack into an HN comment (though a scripted version of the same starting point is sketched below). If you give it a try, I hope it works out for you!
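If you prefer scripting, roughly the same starting point can be driven from FreeCAD's Python console. This is only a minimal sketch, assuming a FreeCAD 1.x API (exact calls shift a bit between versions), with the spreadsheet-driven dimension from the steps above:

    import FreeCAD as App
    import Part
    import Sketcher

    doc = App.newDocument("Floorplan")

    # Optional top-level spreadsheet holding the room measurements
    sheet = doc.addObject("Spreadsheet::Sheet", "measurements")
    sheet.set("A1", "4000")                    # living room width, mm
    sheet.setAlias("A1", "living_width")

    # Part Design body with a sketch (default placement = XY plane)
    body = doc.addObject("PartDesign::Body", "layout")
    sketch = body.newObject("Sketcher::SketchObject", "rooms")

    # First wall line, plus a driving dimension bound to the spreadsheet cell
    line = sketch.addGeometry(Part.LineSegment(App.Vector(0, 0, 0),
                                               App.Vector(4000, 0, 0)), False)
    c = sketch.addConstraint(Sketcher.Constraint("DistanceX", line, 1, line, 2, 4000))
    sketch.renameConstraint(c, "living_width")
    sketch.setExpression("Constraints.living_width", "measurements.living_width")

    doc.recompute()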
FreeCAD is rapidly evolving, and quite a few tutorials already use the v1.1 dev builds. Pay attention to the version used in a tutorial; you can run into trouble following it if you are on an older release.
Back in high school I had extensive experience with AutoCAD R14 (3 years with it, after 2 years of board drafting), and then in college I had some more experience with a couple other packages. But this was all a couple decades ago now.
I found furniture handling OK, but it certainly has its rough edges. The good thing is that you can just import 3D models to create the relevant pieces of furniture yourself, or use the generic boxes that SH3D has if it's just for 2D space planning.
I modelled a few office spaces with it, i.e. to get a feeling for how the space could best be used, and for that I found it quite OK. The result I got relative to the time invested was pretty good for my taste.
https://www.youtube.com/watch?v=IXRpDka6gLI
Strong plus is that you can render views of the finished room in Blender. Big problem is that you first have to learn to use Blender.
But of course the built-in intro in SolidWorks was a far better UX.
FC is not a program you can just open and start using, especially if you have zero experience with parametric modeling.
If you're serious about design, modeling, engineering, etc., and want to own your own data, it's worth investing the hours to learn it starting from the very basics.
> Mom, may we have SolidWorks?
> We have SolidWorks at Home.
> <SolidWorks at Home>
This is in contrast to the example the parent comment brought up, and the one I added: Blender and KiCad do not have this concern; there are free (or, you could say, inexpensive) high-quality tools in their spaces. This is notably not the case for traditional CAD.
Many people complain about it being a mesh editor but it works for me. The sheer variety of tooling and flexibility in Blender is insane, and that's before you get to the world of add-ons.
I want to learn Geometry Nodes and object generation, as I think they will address a lot of the "parametric" crowd's concerns. Blender 5 is meant to be a big step forward in making them easier to use.
Also, I'm not sure if the different tooling lets me see all the flaws of online "parametric" models, or whether I'm being pedantic. They get frustrating. I have Gordon-Ramsay-screamed "How can you fuck up a circle!".
Maybe it is the export or something. I run the 3D toolbox and often models are not manifold.
I see things like two circles in slightly different positions but both are connected in different ways to the surrounding "single" instance model. Things like this mean you end up with "infinitely small volumes". There is no fully enclosed "volume" and so mathematically there is "nothing to 3D print".
As a model this makes no sense to do, and so it irks me.
But clearly the slicer software doesn't care or autocorrects and people make their 3D print happen just fine.
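If you want to see those defects listed rather than hunt for them visually, a quick pass with bmesh from Blender's Python console will do it. A minimal sketch, assuming the model in question is the active object:

    import bpy
    import bmesh

    obj = bpy.context.active_object
    bm = bmesh.new()
    bm.from_mesh(obj.data)

    # An edge bordering anything other than exactly two faces cannot bound a
    # closed volume, which is exactly the "nothing to 3D print" situation.
    bad_edges = [e.index for e in bm.edges if not e.is_manifold]
    loose_verts = [v.index for v in bm.verts if not v.is_manifold]

    print(f"{len(bad_edges)} non-manifold edges, {len(loose_verts)} non-manifold verts")
    bm.free()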
> Mesh formats like STL cannot represent a circle by its position and radius, while a parametric format like STEP can.
This is where I think the Geometry nodes can help. A node (function) can be used to represent the circle with inputs and outputs set or changed as required.[0]
I have not fully explored this space though and so my "hopes and dreams" may well be as useful as thoughts and prayers...
[0] https://docs.blender.org/manual/en/latest/modeling/geometry_...
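To make that concrete, here is roughly what such a "circle as a function" node group looks like when built from Python. This is a sketch only, assuming Blender 4.x's node-group interface API; node and socket identifiers can differ between versions:

    import bpy

    # A node group that takes a radius and returns a circle curve: the parametric
    # information (radius as an input) stays live instead of being baked into a mesh.
    ng = bpy.data.node_groups.new("ParametricCircle", "GeometryNodeTree")
    ng.interface.new_socket(name="Radius", in_out='INPUT', socket_type='NodeSocketFloat')
    ng.interface.new_socket(name="Geometry", in_out='OUTPUT', socket_type='NodeSocketGeometry')

    group_in = ng.nodes.new("NodeGroupInput")
    circle = ng.nodes.new("GeometryNodeCurvePrimitiveCircle")
    group_out = ng.nodes.new("NodeGroupOutput")

    ng.links.new(group_in.outputs["Radius"], circle.inputs["Radius"])
    ng.links.new(circle.outputs["Curve"], group_out.inputs["Geometry"])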
I have used it to make quite a few functional prints, with the help of a CAD plugin and by making sure my scene units are correct.
If I put some holes in something that are 1mm from the edge, but then I print it and see it doesn't line up and needs to be 1.5mm, in Fusion I can just change one number and it all updates. Doing the same thing in Blender would likely be very difficult.
The same thing can be said for resizing whole faces, moving parts, etc. It's all possible and usually pretty easy to do; it just takes a different mentality from that of a parametric modeller.
If you're using boolean operations to make the holes just move the hole-cutter. Same method.
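A rough illustration of that hole-cutter setup (object names here are just for the example): the cutter stays a separate object, so nudging it later simply re-cuts the hole.

    import bpy

    part = bpy.context.active_object           # the thing being printed

    # A cylinder used purely as a hole-cutter
    bpy.ops.mesh.primitive_cylinder_add(radius=1.5, depth=20, location=(1.5, 1.5, 0))
    cutter = bpy.context.active_object
    cutter.display_type = 'WIRE'                # keep it visible but unobtrusive

    mod = part.modifiers.new(name="Hole", type='BOOLEAN')
    mod.operation = 'DIFFERENCE'
    mod.object = cutter

    # Later, "just move the hole-cutter": the boolean re-evaluates, no remodelling.
    cutter.location.x += 0.5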
Blender is not a perfect tool for creating 3d prints but it is a capable tool.
KiCAD was also a meh FOSS ECAD alternative 7-8 years ago; now it is by far the tool of choice for regular ECAD designs. I can see FreeCAD getting there by 2030.
It seems like it has lots of capability but still "punch your monitor" levels of difficulty just trying to do the most basic stuff.
Deltahedra is a great YouTube channel for getting the basics.
And it presents nonsensical problems, like offering to create a sketch on the face of an object and then complaining that the sketch doesn't belong to any object. So you have to manually drag it under the object in the treeview. So gallingly DUMB.
Despite all that, I will wrestle with its ineptitude before giving Autodesk a penny. I get stuff done with it and respect those who give their time to develop it.
> complaining that the sketch doesn't belong to any object
The sketch is by default attached to the "active body". Active Body is a simple, but important concept to understand. Any operation you do, including adding a sketch, is applied to what is designated as the active body. You designate the active body by right-clicking on the desired body in the object pane.
> It suffers from too many "workbenches"
Another understandably common source of confusion. There's the ever-confusing Part and Part Design workbenches.
I think it's best to just ignore Part and use Part Design whenever possible. Part lets you do operations at a more granular level, but Part Design provides a lot more QOL enhancements and is more intuitive. For the vast majority of things, Part Design is more than capable. I would only use Part workbench when absolutely necessary.
You probably understand all of this already. It's directed more towards the reader. I feel the need to defend FC when certain accusations are brought up. It's immensely powerful, capable, and usable. In my case, I can work very rapidly with it - though it's taken some time to arrive here. The project deserves more than just aspersions.
The combo of tracing a bitmap (from a scanned drawing) with Inkscape and then saving the result as SVG to bring into FreeCAD has been a frequent workflow for me. It generally works very well.
To clarify about the "active body" though: This problem occurs even when there's only one active body and the shape upon which you've supposedly drawn the sketch is part of it. So why is FC complaining?
If you create the sketch from the Part Design workbench, then it will be added to the active body.
A Body is specifically a Part Design concept, and FC doesn't presume you'll be working in PD, so this makes sense in a way - it works on the presumption that the Sketcher workbench works with other workbenches and not just PD specifically.
One thing to note is that creating sketches from Sketcher and PD is different. Sketcher offers attachment options to faces, edges, etc., while PD only offers to attach the sketch to base plane (XY, XZ, or YZ).
There is a good reason for this, too: in designing parts, especially complex parts, it is highly discouraged to use faces or edges (i.e., features) as attachment points, because it makes your model very brittle against changes.
This is more of a general CAD philosophy than an FC thing. It's better to set where a sketch attaches based on variable values. For example, if you have a cylinder with a height of 20 and you want to attach a box to the top of the cylinder, rather than attaching to the top face of the cylinder it's better to create a variable h=20, set the cylinder's height to h, and set the box's z-value also to h.
In FC, I use VarSets for this. I used to use the Spreadsheet workbench, but found it clumsy.
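A minimal sketch of that pattern in FreeCAD 1.0+, where VarSets are available (the property and object names here are just for the example):

    import FreeCAD as App

    doc = App.newDocument("Stack")

    # One shared variable drives both shapes
    params = doc.addObject("App::VarSet", "Params")
    params.addProperty("App::PropertyLength", "h", "Dimensions", "cylinder height")
    params.h = 20

    cyl = doc.addObject("Part::Cylinder", "Cylinder")
    cyl.setExpression("Height", "Params.h")

    box = doc.addObject("Part::Box", "Box")
    # Attach the box to the variable, not to the cylinder's top face, so editing h
    # moves the box without referencing any topology that might get renamed.
    box.setExpression("Placement.Base.z", "Params.h")

    doc.recompute()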
Your comment also serves as an excellent illustration of what's "wrong" with FreeCAD, though.
"One thing to note is that creating sketches from Sketcher and PD is different. Sketcher offers attachment options to faces, edges, etc., while PD only offers to attach the sketch to base plane (XY, XZ, or YZ)."
OK, but I would argue that sketching functionality should still be centralized. So if you have a body or some appropriate object selected and invoke the sketcher (or vice versa), attachment to faces, edges, etc. will be enabled. Otherwise it's disabled.
That's standard GUI design, and it's well understood that greying something out tells the user that some condition isn't met. But the user still learns that the option exists and where it resides.
"In FC, I use VarSets for this. I used to use the Spreadsheet workbench, but found it clumsy."
Thanks for the tip. I've been meaning to tackle spreadsheets as a means to resize stuff (another pain point with seemingly many "solutions" in FreeCAD).
MangoJelly has done an amazing job in churning out high quality tutorials for FreeCAD: https://www.youtube.com/watch?v=t_yh_S31R9g&list=PLWuyJLVUNt...
(this is just one playlist, there's a lot more on his channel).
Also second the MangoJelly tutorials. You will have a much better time if you walk through a few lessons first as opposed to just winging it and expecting to understand how everything works immediately.
I'm sure I could grind harder and learn more and make FreeCAD work, but I'm not sure why I'd bother.
I mostly design functional 3D prints. I've found FreeCAD 1.0 fixed most of the annoyances I ran into and I'm pretty productive with it. But, I didn't come into it with an expectation of a SolidWorks or Fusion clone. I learned the tool with its own idioms and it seems pretty straightforward to me. It's not perfect by any means and I've run into the occasional bug. To that end, I've found reporting bugs with reproducible steps goes a long way to getting things fixed.
I'm not sure what it is about CAD in particular, but I find everyone wants the "Blender of the CAD world" while skipping over the decade of investment it took to get Blender where it is. For a long time, discussions about Blender were dominated by complaints about the UX. If we didn't have folks willing to work past a hit to productivity in order to make an investment into Blender, we wouldn't have the amazing open source tool we have today. FreeCAD has all the expectations of a high quality open source CAD tool with hardly any of the investment. Just getting people on /r/freecad to file issues is surprisingly challenging.
By all means, if you're happy with Fusion and don't mind the licensing, have at it. I'm sure there's functionality in there without an equivalent in FreeCAD. I'd personally rather not have my designs locked up in Fusion and see FreeCAD as the best option for me, even if it suffers from the challenges of open source UI design.
Parasolid is powering practically every major CAD system. Its development started in 1986 and it's still actively developed. The amount of effort that goes into those things is immense (39 years of commercial development!) and I don't believe it can be done pro-bono in someone's spare time. What's worse, with this kind of software there is no "graceful degradation": while something like a MIP solver can be useful even if it's quite a bit slower than Gurobi, a kernel that can't model complex lofts and fillets is not particularly useful.
3D CAD is much harder than Blender and less amenable to open source development.
Fornjot has been attempting this: https://www.fornjot.app
It's going to be years or decades before it's competitive though. Also, it looks like they switched to keeping progress updates private except to sponsors, which means I don't actually have any easily-accessible information about it anymore which is sad.
The tricky bit is having a G2 (or even G3) fillet that intersects a complex shape built from surface patches and thickened, with both projected into a new sketch, and keeping the workflow sane if I go and adjust the original fillet. I hope one day we'll see a free (as in speech) kernel that can enable that, until then it's just Parasolid, sadly.
I read a couple of your CAD comments, what an interesting space. It never occurred to me how complex computing 3d geometries is.
Can you help me understand why this problem is so hard?
This is already complex and fiddly enough. Just having a stable 2D drawing environment that uses a constraint solver but also behaves predictably and doesn't run into numerical instability issues is already an achievement. You don't want a spline blowing up while the user is applying constraints one by one! And yet it's trivial compared to the rest of the problem.
Having 3D features analytically (not numerically!) interacting with each other means someone needs to write code that handles the interactions. When I click on a corner and apply a G2 fillet to it, it means that there's now a new 3D surface where every section is a spline with at least 4 control points. When I then intersect that corner with a sphere, the geometric kernel must be able to analytically represent the resulting surface (intersecting that spline-profiled surface with a sphere). If I project that surface into a sketch, the kernel needs to represent its outline from an arbitrary angle — again, analytically. Naturally, there is an explosion of special cases: that sphere might either intersect the fillet, just touch it (with a single contact point), or not touch it at all, maybe after I made some edits to the earlier features.
Blender at its core is comparatively trivial. Polygons are just clumps of points, they can be operated on numerically. CAD is hell.
Now, generally speaking, in a CAD model most surfaces will be “analytic” (plane, torus, cone, arc, line, etc.). But whenever some complex surface that joins these surfaces is required, (NUR)B-splines are the principal technique for “covering” the gap.
Firstly, you probably have a variety of analytic shapes to represent — things like lines and circles in 2D or cubes and spheres in 3D. Even seemingly simple questions, like whether two such shapes intersect or not, can require a significant amount of logic to calculate the answer. That logic will often be specific to the exact combination of shapes you have, because the number of freedoms and nature of any symmetries in the shapes you’re working with can mean you would use completely different algorithms for superficially similar situations.
Secondly, while you’re probably going to implement a lot of analytic calculations, in realistic models you’re probably going to end up using numerical methods as well. That can be because you need to work with geometry like Bézier curves or NURBS surfaces that has many freedoms. It can be because even if you start with convenient analytic shapes, new geometry that you derive from those shapes, for example by offsetting a single shape or by combining details from multiple shapes as in constructive solid geometry, won’t in general have an analytic shape itself.
By the time you allow for the numerous different types of constraint that you might want to enforce between different types of geometry and the numerous different ways you can construct new geometry from geometry you already have, the scale of the problem explodes. And on top of that, almost everything you do is going to have numerical sensitivity issues, and all but the simplest algorithms are going to need detailed, careful analysis to make sure you really have covered all the possibilities. In this field, “edge case” and “corner case” are literal terms and not just figures of speech!
To give a practical example, without looking up how to do it, could you confidently calculate whether two arbitrary cuboids are completely separate or they touch or intersect somewhere? As another example, given an arbitrary parametric surface, a sphere in a position just resting on that surface, and the constraint that the surface of the sphere must remain tangent to the parametric surface without intersecting it anywhere, how would you calculate the path the centre of the sphere will follow if you introduce gravity to start the sphere rolling in a certain direction along the surface?
These are relatively simple problems in the field, but each already has some subtlety that leaves the “obvious” solutions incomplete. Solve a few thousand problems like that, each unique and with its own calculation strategy, and now you’re starting to get a practically useful geometric modelling system. (You’ve also probably had a team of dozens of mathematicians and developers working on it for decades.)
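For the cuboid example, the textbook answer is the separating axis theorem, and even this "simple" case already needs fifteen candidate axes plus care around degenerate cross products. A rough sketch:

    import numpy as np

    def obb_overlap(c1, axes1, half1, c2, axes2, half2):
        """Separating-axis test for two oriented boxes.
        c*:    box centre, shape (3,)
        axes*: 3x3 array whose rows are the box's unit edge directions
        half*: half-extents along those directions, shape (3,)
        Returns True if the boxes touch or intersect anywhere."""
        candidates = list(axes1) + list(axes2)
        # The 9 cross-product axes catch edge-edge separation the face normals miss.
        for a in axes1:
            for b in axes2:
                cross = np.cross(a, b)
                norm = np.linalg.norm(cross)
                if norm > 1e-9:                 # skip near-parallel (degenerate) pairs
                    candidates.append(cross / norm)

        d = c2 - c1
        for axis in candidates:
            r1 = np.sum(half1 * np.abs(axes1 @ axis))   # projection radius of box 1
            r2 = np.sum(half2 * np.abs(axes2 @ axis))   # projection radius of box 2
            if abs(d @ axis) > r1 + r2:
                return False                    # found a separating axis
        return True                             # no separating axis: touching/overlapping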
So it doesn't really represent meaningful progress towards FOSS CAD because ultimately it uses the same proprietary, expensive library to do the heavy lifting as most of its competitors.
You're on point that there's a tremendous amount of money captured by Autodesk for CAD software that could be better directed at the open source community instead.
Software like OpenSCAD and FreeCAD is obviously not suitable for much commercial work and has very irritating limitations for hobbyist work. In my mind a big part of that is the UI. Blender has a good, established UI at this point, so I'd love to see the open source CAD that provides an alternative to vendor lock-in come from a Blender add-on instead of a separate program.
I am no expert, but as I understand it the primary difficulty in developing good alternatives to commercial CAD software lies in the development of an effective geometric kernel.
It seems to me that if a developer of an open source CAD program develops it as a Blender add-on, they can effectively outsource the rest of the development effort to the Blender community while focusing on the CAD kernel itself.
It must be difficult when so much management is short sighted and focused on delivering short term profits for shareholders. Even academia is run like a business now.
Unless a privately held rogue company like Valve got interested, it's probably going to have to wait for a government/NGO/scientific institution. Industry, particularly the tech industry, is notorious for leeching off free and open source software, in some cases building entire businesses on it, and not giving back.
Management just reacts to environments created by governments. When ZIRP was around money was very easy to get hold of - too easy. Now it's really hard because businesses have to beat government bond interest rates, which are guaranteed, to get debt/investment.
> Unless a privately held rogue company like Valve
Valve is not a rogue company.
> Industry, particularly the tech industry, is notorious for leaching of free and open source software and in some cases building entire businesses on it and not giving back
Your premise is wrong. It's impossible to leech off something that is freely given. This is like being angry because people don't all tip a street performer. The deal is it's free.
And your facts are wrong. Businesses fund a giant amount of OSS work.
OCC is the domain where FreeCAD's biggest limitations (fillets, chamfers, draft, thickness) are found, and the design of its API is part of why the topological naming problem was so difficult to mitigate.
You need more than a couple of good devs to solve this, or CapGemini would have. CAD kernels are one of the hardest possible things to write in a way that is bug-free, which is why there are so few of them.
I think it is possible that OpenCascade will get more attention because of the involvement of EDF (the French multinational power company) and the French atomic energy commission in SALOME. Things do seem to be slightly improving.
It would be interesting to see if they would license that out further for some amount of money.
https://github.com/sandialabs/sgm
Originally open-source, but since taken back in-house. As I understand, which should not be construed as an accurate accounting, Sandia wants to flesh out the basics further before (potentially) open-sourcing it again.
Some of these issues are long standing and really hard to solve. Someone could probably defend an entire PhD thesis on “redesigning the topological representation to eliminate seam edges” without making much practical progress.
OpenCascade’s kernel forces you to deal with periodicity in topology (the shape structure), while Parasolid handles it in geometry (the math). A cylinder is mathematically continuous because there's no actual "seam" where it starts and ends. But in OpenCascade there’s a seam from 0 to 2π and this seam edge becomes a real topological entity that every algorithm has to deal with.
In Parasolid the cylinder is periodic, so when you query a point at U=2.1π, the kernel just knows that's equivalent to U=0.1π. The periodicity is a property of the surface math, not the shape structure. It’s not using polygons/edges/vertices but a system of equations to calculate the surfaces.
This is why boolean ops fail so often in FreeCAD: it’s asking the kernel to intersect with an artificial edge that shouldn't exist. The seam creates edge cases in intersection calculations, makes filleting near seams a nightmare, and complicates things. Parasolid's implicit handling requires smarter surface evaluation code upfront, but then everything else just works.
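The difference is easy to see in miniature: with the period handled in the surface math, a parameter query never meets a seam (a toy sketch, not how either kernel is actually implemented):

    import math

    def cylinder_point(radius, u, v):
        # Periodicity lives in the geometry: u = 2.1*pi evaluates to the same point
        # as u = 0.1*pi, so downstream algorithms never see a seam edge at u = 0.
        u = math.fmod(u, 2.0 * math.pi)
        return (radius * math.cos(u), radius * math.sin(u), v)

    print(cylinder_point(5.0, 2.1 * math.pi, 3.0))
    print(cylinder_point(5.0, 0.1 * math.pi, 3.0))   # same point, no special case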
I can't recall a single CAD system which did this differently. Has modern solidworks figured this out?
The algorithms it enables are fundamentally more capable and robust than traditional kernels based on linear algebra (vectors and matrices). You can do really fancy things like interpolating in space and time robustly, find extrema in high-dimensional phase spaces, etc...
This could potentially allow straightforward and robust solvers for kinematics, optimal shape finding, etc...
Every few decades there's a "step change" where some new algorithm or programming paradigm sweeps away the old approach because suddenly a hobbyist can do the same thing solo that took dozens of developers a decade in the past. I suspect (but cannot prove) that PGA is one of those things.
Previously discussed here:
https://news.ycombinator.com/item?id=30597061
My thinking is to approach the problem from a fundamentally different angle. There's already constructive solid geometry (CSG) kernels, triangle mesh kernels, and NURBS-based kernels. Their mathematical foundations are very different, which results in wildly different behaviour and capabilities.
I came across PGA while studying physics, saw some vaguely CAD-like CSG demos and I realised that it could be yet another mathematical foundation on top of which CAD applications could be built.
Notably, variants of GA and PGA are already used in robotics, inverse kinematics, etc... including 5-axis milling, so it's not unheard of in industry. However, it's always used as a "spot" solution to work around a problem such as gimbal lock, or interpolating transformations. Typically by converting back-and-forth between linear algebra representations and some variant of GA temporarily. I'm thinking of using PGA throughout as the foundational geometric elements.
In general, GA and its variants like PGA are good at calculating tangents and offsets for points, lines, planes, spheres, and other parametric surfaces. The key benefit is that the exact same algorithm will work in 2D, 3D, 4D, or any higher number of dimensions. This could in principle allow very elegant formulas for tasks such as "find the biggest thing that won't hit this other thing at any point in time", or the same thing but with a 0.1 mm gap. Similarly, it's possible to reuse the same code and "human intuition" in very high-dimensional spaces, such as the phase spaces used in more complex motion planning or engineering simulation scenarios.
Even just the pictures on the cover might give you an idea: https://www.amazon.com.au/Projective-Geometric-Algebra-Illum...
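Not full PGA, but the flavour comes across even with plain projective (homogeneous) coordinates, where "intersect two lines" is a single product and parallel lines are not an error path. A toy sketch:

    import numpy as np

    def meet(l1, l2):
        """Intersection of two 2D lines given as homogeneous coefficients (a, b, c)
        of a*x + b*y + c = 0; in GA terms, the 'meet' of the two lines."""
        return np.cross(l1, l2)

    p = meet(np.array([1.0, -1.0, 0.0]),    # y = x
             np.array([0.0, 1.0, -2.0]))    # y = 2
    x, y, w = p
    print(x / w, y / w)                      # -> 2.0 2.0

    # Parallel lines return an ideal point (w == 0), i.e. a direction, handled by
    # the same formula instead of a divide-by-zero special case.
    print(meet(np.array([1.0, -1.0, 0.0]), np.array([1.0, -1.0, 3.0])))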
Take a look at https://Plasticity.xyz. It's not open source, but it's got a small, highly dedicated team behind it. It's built on the same kernel as SolidWorks (Parasolid), so it's quite robust.
Also take a look at SolveSpace, caligula, FreeCAD, ...
Hard nope.
Restrictions are always annoying but I think they're striking a reasonable balance.
I was going to suggest solvespace. It is very barebones, but was much easier to use than FreeCad for me. It also has constraints in 3D space, which I use a lot: https://solvespace.com/index.pl
I'd like to hear someone's perspective on how difficult it would be to unify OpenUSD and CAD file formats so that they are portable between programs?
Speaking with over a decade of experience as a developer in industrial CAD (but still just one random guy's point of view). The question _isn't_ about the availability of a 3D kernel.
3D kernel is not the "moat".
You can cross that with money.
You can purchase an ACIS or Parasolid license and you are off to the races. Or even use OpenCascade if you know what you are doing.
The more interesting question is: Ok hotshot, you have a 3D kernel, 10M of investor money (or equivalent resources).
What's your next move? What industry are you going to conquer? What are the problems you are going to solve better than the current tools do?
What's the value you provide to the users except price?
What are you going to do better than the incumbent software in the relevant specific design industries?
Which industry is your go-to-market?
Etc etc.
The programmer's view is "I will build a CAD". The industrial user on the other hand does _NOT_ want a "general CAD software".
They want a tool with a specific, trainable workflow for their specific industrial use case.
So "if you build it and they will come" will require speaking to a specific engineering/designer audience.
You can of course build a generic tool (it's all watertight manifolds in the end) but the success in the market depends on the usual story about market forces. What's your go to market/beached. Does it enable you to move to other markets? And the answer usually is - NO. You need to build the market share in _each_ domain separately.
” rest of us could be "off to the races" without the upfront cost of ACIS or Parasolid.”
Just start coding your CAD tool dude if you want to! You can totally start with OpenCascade.
All these kernels do is 1) define tensor product surfaces (usually NURBS) with trimming curves, 2) join these surface patches into watertight solids, and 3) export STEP (non-AM) or STL (AM).
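For the curious, the core of item 1 fits in a few lines; the decades of kernel work are in everything around it (trimming, tolerances, topology, robustness). A toy, non-rational Cox-de Boor evaluation:

    def basis(i, p, t, knots):
        """Cox-de Boor recursion for the i-th B-spline basis function of degree p.
        (Half-open interval convention: the very last parameter value needs
        special-casing in real code - one of many edge cases kernels carry.)"""
        if p == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + p] != knots[i]:
            left = (t - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, t, knots)
        if knots[i + p + 1] != knots[i + 1]:
            right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, t, knots)
        return left + right

    def surface_point(ctrl, ku, kv, p, q, u, v):
        """Tensor-product surface point: S(u,v) = sum_ij N_i,p(u) * N_j,q(v) * P_ij.
        ctrl is a 2D grid of 3D control points; NURBS adds a per-point weight."""
        return [sum(basis(i, p, u, ku) * basis(j, q, v, kv) * ctrl[i][j][k]
                    for i in range(len(ctrl)) for j in range(len(ctrl[0])))
                for k in range(3)]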
The question is which hardware manufacturing path is the one you know so well you know what the important things are?
And the dirty secret here is that the 3D is _not_ the hardest part (it's hard, sure). You soon end up with schema management and database management as well as domain-model challenges that are quite complicated. The main problem here will be that you can't just start coding without knowing where you end up. You need to know _every step_ or you will end up in a quagmire that is no better than the existing tools (slow, buggy). It's totally doable, but the total complexity will be non-trivial and the decisions you make in year 1 will come back to haunt you and bite you in the ass in year 5. Be prepared!
And for those coming outside to this - I’m not being glib! A 3D design tool totally can be a one man show to start with (see NomadSculpt, Plasticity or MagicaVoxel for example).
” I guess what I mean is that if the likes of huge defence contractors / automotive industry etc…”
I’m not sure what you are getting at.
These companies created what we call CAD starting from the 1950s or so (research Parsons' NC work at MIT, or Bézier's later work at Renault, if you want a Wikipedia tour of the early years).
The current industry isn’t shaped like this because Dassault, Siemens and Autodesk decided to go all incumbent. They were more or less pushed to do so by market forces and industrial history (and if it was not them some other company would be the incumbents).
Several open source CAD packages use it (FreeCAD, Cadseer, Dune3D, SALOME Shaper, KiCAD, CadQuery/Build123D, DeclaraCAD, etc.).
There are firms using it for bespoke naval and aerospace applications.
The reality is CAD kernels are incredibly difficult, multi-decade projects, and part of the issue is a lot of the domain expertise in them is literally dying off or retiring.
Well, they are complex but not impossible. I understand for example zoo.dev just wrote their own.
I also understand tinkercad and womp wrote their own geometry solvers.
Not sure how to compare the above with OpenCascade (since, like you said, it’s pretty complex & public and the others are not public) but at least one can accurately say ”you can absolutely write a 3d modeler from scratch”.
The further you get from a "general purpose" tool toward a highly specialized yet "trainable" one, there's a very perceptible drop in quality. We're talking about god-awful software here, where if you click the wrong button, the whole thing crashes, or you finish clicking the 30 different things in 4 different "wizards" and it still doesn't work. And all for the low-low cost of about 0.25-1.25x the engineer's salary/seat (subject to audit)!
Just having a solid, open-source framework to build upon would have many positive externalities across the supply chain, similar to how open-source systems software allows innovation across the computing supply chain. There would be no AWS without GNU/Linux.
I would argue they do. What we call ’CAD’ today - digital design of surfaces for manufacturing - originated in aerospace and automotive industries.
What they _want_ is to manufacture.
To manufacture they need designs as input.
The designs ultimately end up as a) drawings b) surface models for toolpath programming c) 3D models for project coordination, validation, etc
They don’t actually care how many buttons you need to press as long as the final design is fit for purpose.
I agree CAD software is stereotypically not fun to use.
Also - there is a non-trivial population of CAD users who actually take pride in their skill to use these more or less broken tools. It’s some sort of weird masochistic/macho badge of honor.
My guess is that optimizing the cost of design is not that interesting to anyone as long as a, b, and c above are fulfilled.
The labour of the engineers who need to use the CAD tools is an insignificant percentage of the total cost in any case.
To understand why, you need to dive into the cost structures and value creation mechanisms, as well as the culture, in hardware.
To start with, in many fields the overall culture in hardware is ”we sell the same junk as everyone else”.
”Just having a solid, open-source framework to build upon”
What’s missing from OpenCascade?
” There would be no AWS without GNU/Linux.”
Sure there would. Open source more or less copy-pasted existing industrial/academic patterns. But open source probably makes it cheaper and better.
There would be no AWS without the internet. Both as the protocol and as the billions invested in the fiberoptic cables crisscrossing the world's oceans.
The world is built on hardware. Hardware is not _stupid_. But it's _different_ from software.
To revolutionize CAD you first need to have deep understanding of the _hardware_ value chains and industrial methods. I think what makes it hard is _not_ having a kernel but having this cross discipline knowledge inside single org.
It would be like if Oracle, Broadcom, and Computer Associates owned 99.9999% of the software market.
Give good open-source tools to the people who are actually making the hardware, and there will be supply-chain-wide positive externalities.
Weeelll … it’s a 70-year-old field if you start counting from NC. It's also possible most of the low-hanging innovation fruits were picked long ago. Also, innovation required decades of _really hard work_ in applied maths (see NURBS or Catmull-Clark surfaces*).
I’m not saying there isn’t room for innovation!
But we also have the examples of several other types of applications clearly reaching maximum utility: photo editing around Photoshop 6.0 or so, word processing decades ago, spreadsheets decades ago.
Sometimes things _do_ seem to have their optimum form.
I’m not saying FOSS CAD would not have a value! But it’s not obvious to me it would be a vehicle for innovation.
I guess better, easier-to-edit, and more robust surface representations would be valuable if someone came up with them. But that's a field that generally requires decades of applied research to first find a better model, and then a decade of productizing the innovation.
So since you need to keep industrial mathematicians on board without certain outcomes, it's a bit like medical research - you need continuous investment, careers, etc. For this reason it's not obvious to me that an open source framework would work predictably here.
Again, I’m not saying you are wrong. Maybe there are simple low hanging fruit just waiting to be picked. But I’m not sure you are exactly right either.
* Catmull-Clark surfaces are a great example. Ed Catmull founded Pixar years after inventing the surface, but it took a decade of industrial research inside Pixar before they became a useful tool (Geri's Game). Toy Story was still NURBS, somewhat ironically.
You can - and should be able - to export to a universal format though.
But having a universal format is different than having a universal design space.
The requirements of a mechanical engineer are quite different from those of a structural engineer and/or detailer for houses. And again different from those of a doctor planning a surgery based on a CT model. For example, you need rebar in only one of these. You need delicate fillet control in only one of these. You absolutely need support for importing and visualizing volumetric data in only one of these.
What I'm getting at is that while all design software has some common minimum set of features which _can_ be universal, the feature sets of the stereotypical domains are surprisingly disjoint, even if you only compare AEC and mech eng. Hence a ”universal” design software would be a union of a very, very large set of totally unrelated features. Which suggests it would be hard to develop, hard to use, and hard to maintain.
So it’s better to have a collection of applications that aspire for ”universal scope” as a collection rather than ”one app to rule them all” which you will never get done in any case.
If we presume a hypothetical FOSS mission to enable computer design for all major fields benefitting from digital design for physical outcomes, it should then focus on this ”common min” core, interoperability (strongly linked to the common min core but separate concern - ie import and export) as well as domain specific projects of producing the domain specific UI and tooling.
With that in mind, I'd be interested to hear your thoughts on what a practical FOSS implementation of such a framework might look like. Or at least a FOSS alternative to Fusion 360. Would you use ready-made geometric kernels, improve on existing ones (OpenCASCADE?), or start from scratch? Would you adapt to existing standard formats (import/export), or go with a new one? Would you build on FreeCAD, use another suite as a basis (source code if FOSS, or inspired UX/workflow if not), or do you see no point in that and think it would be better to start fresh? I was rather expecting a discussion along these practical lines.
Thanks for your perspective.
Who is the expert user? What are they building? What are the upstream and downstream application?
See, this is why this area is hard from a software POV. It _looks_ like a well-specified engineering space - something like a network protocol - but actually the engineering happens _in the engineers' heads_, in the organizations that use these tools, and in the tools that facilitate parts of the process and automate things that are practical to automate. All CAD tools are closer to an Excel sheet than to a single well-formed abstract syntax tree like a language grammar.
Now, Fusion 360 narrows down the audience quite a bit but also goes outside of my core expertise (which was in AEC). So I don't have good, detailed off-the cuff opinions here.
I can tell you what the _outputs_ are though: CAM (toolpath planning for CNC or slicing for AM), drawings, and 3D models for project coordination.
So, the question becomes - which of these workflows are we talking about. All of them? And for whom?
I know you specified "Fusion 360", but that is a product designed from the point of view of being a vendor-lockable commercial offering. It's really great there. I'm not sure the same package makes sense in a FOSS context.
"Would you use ready-made geometric kernels, improve on existing ones (OpenCASCADE?), or start from scratch?"
If one wants to export STEP then definitely use OpenCASCADE. If additive manufacturing is the target then STL or 3MF suffices, and I would use the Manifold library there as much as possible. The 3D kernel is not the hardest part or even the most important (even though it's hard and important).
If working in AEC then IFC export/import is a must (it's a schema extension on top of STEP).
"Would you adapt to existing standard formats (import/export)"
Standard formats if you want anyone to use the software for anything, ever.
"Would you build on FreeCAD, ... or do you see no point in that and think it would be better to start fresh?"
I would figure out first what the target user needs. Since CAD programs live in living, breathing industrial design ecosystems you can't really design one in isolation. Without knowing what the user needs and does you really can't answer that question!
If the aim is to offer a credible alternative to Fusion 360, then what you need to do is to make contact with an engineering office. Then you find their CAD manager, and figure out what their organizational parameters are for the CAD workflow. Does FreeCAD work for them? Why not?
If it turns out FreeCAD is perfect for their workflow then it's very likely there are other offices like that, and the FOSS project becomes just about FreeCAD support, education and evangelism.
And actually the key thing might be to design a process for moving years and years of ongoing project data and models to this new platform. Industrial CAD is super sticky because you have decades of project data, billions of dollars of investment, and hundreds of people's daily processes being supported by the specific quirks and features of these software packages.
Personally I'm _skeptical_ FreeCAD would be a drop-in replacement, but if my industry years taught me anything, it's that you need to _see what the user does_, analyze their workflow from first principles, and then understand how to serve them.
Of course it would be _more fun_ to start from scratch. But the concept is not positioned as an expression of personal creativity but as a pragmatic allocation of hypothetical FOSS investment with the intent of increasing industrial FOSS use, and that's a _different_ thing from having a fun personal project.
Now, the above was from the point of view of "offering a credible industrial platform".
If the idea is not to offer a commercially credible alternative, but just to support something like hobbyist workflows for 3D printing, that is a totally different problem to solve, much simpler, and likely much more fun.
To be honest this domain is sort of an obsession of mine and I'm thrilled to discuss it. Hope my monologue was helpful!
To expand slightly.
There are common parts and stuff that is totally different.
For example, the core parts in graphics and geometry are always familiar whether you work in games, VFX, or whatever CAD industry.
But then all the wrapping around those core concepts vary quite a lot, and in terms of mass of complexity and code are drowned by all the domain specific stuff.
A viable OSS alternative, particularly one that prioritizes simplicity and being a gentle on-ramp for hobbyists, would be fantastic.
It wouldn't need (and I would argue shouldn't attempt) to compete with the big for-profit outfits to be useful either. Offering a simplified UX for the most-used features of the pro software would have a ton of utility, while also being a great place to build the foundational skills you need in order to master the more complex stuff.
Furthermore, a project with the mission of complementing the pro tools rather than replacing them would probably be far more likely to succeed, IMO. As long as projects could be exported to a variety of formats and brought into some other software when a specific use case arises, you'd have all your bases covered.
That said, I use FC as my main CAD driver and, not only tolerate it, but enjoy using it. I had to watch several hours of introductory videos to get the hang of things initially, but now I'm quite fast and proficient.
The initial pains and common complaints about the UI and such are basically non-issues for me now, and when I model, my cognitive energy is basically devoted to the design problem itself rather than issues with the UI or the behavior of the software.
It's necessary to put the time into learning it, but it's worth it.
Radial tiling my beloved, and a seemingly far more straightforward array modifier <3 Faster volume scattering for non-homogenous volumes.
For those wondering "where the AI is", the new Convolve Node might be it :) Convolutions are a pretty generic signal processing operation (equivalent to a Hadamard product in the frequency domain) and are also the workhorse of neural networks that work with images. Realistically though, this will be mostly useful for wonky hand-crafted blurs.
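For anyone curious what the operation itself amounts to, a box blur is just one hand-picked kernel away (SciPy used here purely for illustration):

    import numpy as np
    from scipy.signal import convolve2d

    image = np.random.rand(256, 256)      # stand-in for a single render-pass channel
    kernel = np.ones((5, 5)) / 25.0       # 5x5 box-blur kernel

    # Same machinery a convolutional layer uses, just with a fixed kernel.
    blurred = convolve2d(image, kernel, mode="same", boundary="symm")
    print(blurred.shape)                   # (256, 256)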
The new sequencer looks fantastic, too. I always went to DaVinci Resolve, but I might be able to go full Blender. Compositor modifiers in the sequencer are also very welcome.
This is incredible for me.
Meanwhile, in other niches, Microsoft Office still beats open source office suites like LibreOffice; Photoshop isn't about to give up its crown to GIMP; Lightroom isn't losing to Darktable; and FreeCAD isn't even in the rear view mirror of Solidworks.
I wonder what will be the next category of open source to pull ahead? Godot is rapidly gaining users/mindshare while Unity seems to be collapsing, but Unreal is still the king of game engines for now. Krita is a viable alternative for digital painting.
OBS is on line 2 ....
https://www.blender.org/user-stories/japanese-anime-studio-k...
Nobody talks about how Linux dominates the server space anymore. Nobody talks about how “git is winning” or getting “battle tested”. These are mundane and banal facts.
I don’t believe the same has happened to Blender yet.
https://www.youtube.com/watch?v=ZgZccxuj2RY
https://www.blender.org/user-stories/making-flow-an-intervie...
tl;dw: probably.
You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
I wouldn't frame it as "scared". The issue is that at a certain scene scale Python becomes the performance bottleneck if that's all you can use.
> You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
This is fine if you only ever have one show in production. Most non-boutique studios have multiple shows being worked on in tandem, be it internal productions or contract bids that require interfacing with other studios. These separate productions can have any given permutation of DCC and plugin versions, all of which the internal pipeline and production engineering teams have to support simultaneously. Apps that provide a stable C/C++ SDK and Python interface across versions are significantly more amenable to these kinds of environments as the core studio hub app, rather than being ancillary, task specific tools.
If the company is more than a boutique shop, I would expect them to have a somewhat competent CTO to manage this kind of problem - one that isn't specific to Blender, even!
Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
Small timers, boutique shops, and humble folks like me just try to get by with the tools we can afford.
On a related note, though: I built a Blender plugin with version 2.93 and recently learned it still works fine on Blender 4. The "constantly changing API" isn't the beast some claim it is.
Considering productions span years, not months, artists would never get to use newer tools if studios operated that way. And it really only works if shows share similar end dates, which is not the reality we live in. Productions can start and end at any point in another show's schedule, and newer tools can offer features that upcoming productions can take advantage of. Each show will freeze their stacks, of course, but a studio could be juggling multiple stacks simultaneously each with their own dependency variants (see the VFX Reference Platform).
> Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
That would be the ideal, something that can be difficult to achieve in practice. You'll find small teams of quality engineers overwhelmed with the sheer volume of work, and other larger teams with less experience who don't have enough senior folks to guide them. The industry is far from perfect, but it does generally work.
> But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
And back to reality XD
That being said a number of studios have been reducing their Autodesk spend over the past few years because it's honestly a sick joke the way the M&E division is run. It's a free several hundred million a year revenue earner, but they foist the CAD business operations onto it and the products suffer. Houdini's getting really close, but if another AIO can cover effectively everything in a way that each team sees is better, you will start to see the ramp up of migrations occur. Realistically this comes down to the rigging and animation departments more than any other. But Maya will never go away completely as it'll still need to be used for referring to and opening older projects from productions that used it, beyond just converting assets to a different format. USD is pretty much that intermediary anyways, it's the training and migration effort that becomes the final roadblock.
Personally, I think they pale in comparison to the original series and lose a lot of what makes Eva special and interesting to begin with, so I'd kinda love to dump on them a bit, but... it's about as big of a production as it gets in the anime industry. They're of course nowhere near Pixar level or similar, but it is clearly an example of Blender being battle tested by a serious studio on a serious project.
So while Maya is currently the standard, I don't believe that it's growing. It'll probably be around still in 20 years, with lots of studios having built their pipelines and tooling around it, with lots of people being trained in it, and because it's at the moment still better than Blender in some aspects like rigging and animation (afaik).
And also, how can you say Blender is not battle-proven? I mean, the big studios use Maya like fortune 500 companies use Microsoft Windows - doesn't mean Linux isn't battle proven.
I haven't used Blender much. It's too focused on animation. I mainly make more engineering-style things for 3D printing. Even though it can technically do that, the interface just rubs me the wrong way somehow. And I can use Fusion 360 for free.
I have used Blender for 3D printing, though. Architectural design, as well.
The interface is a rub at first, and then again after you get used to it and have to use something else. :-)
Those are different niches. Not even Apple has managed to budge Windows out of corporate environments, though it is a lot more present than 20 years ago.
I imagine it's similar with Blender and Maya: do they fill the exact same space? Or is Blender adopted by different types of companies (probably smaller)?
You asked the right questions, IMO, and I think they are self evident.
> do they fill the exact same space?
No.
> is Blender adopted by different types of companies?
Yes.
A company or institution that has money to burn will opt for more "professional" suites (Maya, Microsoft, ...). Smaller entities will use the cheaper alternatives (Blender, Linux, ...).
For vendors, the former is obviously a no-go. The latter has the issue of being throttled by Python, so you have to effectively create a shim that communicates with an external library or application that actually performs the compute-intensive tasks.
Most (if not all) industry DCCs provide a dedicated C++ SDK with Python bindings available if desired.
1. Extend Blender itself. This will net you the maximum performance, but you essentially need to maintain your own custom fork of Blender. Generally not recommended outside of large pipeline environments with dedicated support engineers.
2. Native Python addon. This is what 99% of addons are, just accessing scene data via Blender's Python interface. Drawbacks mentioned above, though there are some helper utilities to batch process information to regain some performance.
3. Hybrid Python Addon. You use the Python API as a glue layer to pass information between Blender and a natively compiled library via Python's C Extension API. With the exception of extracting scene data info, this will give you back the compute performance and host resource scalability you'd get from building on Blender directly. Being able to escape the GIL opens a lot of doors for parallel computation.
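A bare-bones sketch of option 3; the shared library and its function are hypothetical, only the Blender calls are real:

    import ctypes
    import numpy as np
    import bpy

    # Hypothetical native library built separately (C/C++/Rust with a C ABI):
    #   void smooth_positions(float* coords, size_t count);
    lib = ctypes.CDLL("/path/to/libdeform.so")
    lib.smooth_positions.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.c_size_t]
    lib.smooth_positions.restype = None

    mesh = bpy.context.active_object.data
    n = len(mesh.vertices)

    # foreach_get/foreach_set are the batch path: one bulk copy instead of a Python
    # loop per vertex, which is where pure-Python addons hit the wall.
    coords = np.empty(n * 3, dtype=np.float32)
    mesh.vertices.foreach_get("co", coords)

    # The heavy lifting runs in native code, outside the GIL.
    lib.smooth_positions(coords.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), n)

    mesh.vertices.foreach_set("co", coords)
    mesh.update()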
Super small nit (or info tidbit), but it doesn't take away from your overall message regarding production and scene scale.
Pixar does not and has not used Maya as the primary studio application, it's really only used for asset modeling and some minor shading tasks like UV generation and some Ptex painting. The actual studio app is Presto, which is an in-house tool Pixar has developed over the years since its earliest productions. All other DCCs are team/task specific.
Dreamworks is similar with their tool, Presto, IIRC. Walt Disney Animation Studio (WDAS) does use Maya as the core app last I saw, but I don't know if they've made any headway with evaluating Presto since 2019...
> Presto, which is an in-house tool Pixar has developed
> Dreamworks is similar with their tool, Presto, IIRC.
I guessed that's a typo though and yeah, the Dreamworks animation tool is called Premo.
Bah, I thought I proofread that! Yes, super late night typos strike again...
IMHO that's only still true because large studios can't afford to move their entire highly customized production pipeline, which they have built around Maya for nearly three decades, to any other tool (Blender or not), even if they desperately want a divorce from Autodesk. Autodesk basically has them locked in and can milk them until all eternity or until the studios go bankrupt.
I bet that the next generation of CGI and game studios will be built around Blender (and not based on the quality of those tools, but because of Autodesk business practices).
(edit: somehow my brain switched Adobe and Autodesk - forgiveable though because both use the same 'milking existing customers' strategy heh)
Really? I haven't done 3D rendering in a long time, admittedly, but back then Maya and Lightwave were miles ahead of Blender. Rhino3D too. Even 3ds Max was better. Lightwave seems to have sadly fallen off (unfortunate, because IMO it was the best at one point and had excellent ray tracing). I didn't realize Blender had come such a long way -- that's great.
Since you mention niches: Adobe InDesign has no OSS competition at all, and Illustrator is still much better than Inkscape.
DaVinci Resolve is probably competitive with Premiere, but while free it's not actually open source. Either a viable open source competitor catching up or DaVinci publishing the code could change that fast, though.
I don't have the ability to compare these things in intimate detail, but Lightworks has at least been used for "real" productions [2] so I think it's production-ready.
I still root for them, though. More NLEs are good for the editing world as a whole, and who knows, BMD could heel-turn on us and ruin Resolve. I've gone through 3 different NLEs since 2011 (FCPX -> briefly Premiere -> Resolve), so I definitely don't plan more than 3-5 years ahead lol
I haven’t done anything with Resolve so I don’t have much to compare to.
Tbf, everything starts somewhere and all the proprietary apps you listed were not instant market leaders.
I can and do use all those FOSS tools just fine, both as a hobbyist and professionally; my needs are met. Others may not find the same, but I suspect there's just a lot of stickiness preventing people from even trying new workflows.
There is just something about it that does not click with me. Even selecting a foreground object when the background was almost white never worked for me. Just so fiddly.
I can get stuff done with it.
I wouldn't say that. There are bugs in FreeCAD that drive me bonkers, but I would be dishonest if I said it hadn't gotten more stable and didn't support my Nvidia card better today compared to previous releases.
Mine aren't: GIMP is okay, FreeCAD is a complete joke. It is painfully obvious that their development is done primarily by F/LOSS enthusiasts rather than by industry professionals and UX designers. They are closer to being a random collection of features than a professional workhorse. You might eventually get the job done, but compared to the proprietary competition it is woefully incomplete, overly complicated, and significantly buggier.
The poor quality of FreeCAD is the main reason my 3D printer is collecting dust. As a Linux-only user the proprietary alternatives mostly aren't available to me, and FreeCAD is bad enough that I'd rather not do CAD at all. The Ondsel fork was looking promising for a while, but sadly that died off.
Edit: oh, it was even worse before? I hope they keep going in this direction then
I've only ever used OpenSCAD, FreeCAD, and Onshape.
And for me, FreeCAD has increased my use of my 3D printer, originally built in 2016.
CAD has a specific workflow, which is true regardless of the tool. Sticking to the right order of operations goes a long way toward having a positive experience, as do some basic intro tutorials.
It's not a tool you can bumble around in and figure out easily, even with experience in similar tools.
I only do occasional design for some things to print, and I'm always happy to come back to my OpenSCAD text files that I can actually read and understand within minutes, rather than having to try to remember the correct click path through some giant graphical CAD software.
It is definitely not a joke. It enabled me to have a very fulfilling hobby.
[1] Examples: a Steam Deck Skadis Dock, all kinds of adapters for easy connection of wood parts, magnetic modular mini shelves for everything in the bathroom, a replacement for a broken door handle, a delicate plissé mounting point that is no longer produced, accessories for the microscopes for school …
If you want to limit standard office productivity software to programs written with a GUI in mind, MS Office was the leader on the Mac before it came to the PC and crushed WordPerfect and Lotus early on.
Hadn’t heard that. How many AAA vfx studios have left Maya for Blender?
KiCad, for PCB design. They have been making massive improvements over the last few years, and with proprietary solutions shutting down (Eagle) or being unaffordable (Altium) Kicad is now by far the best option for both hobbyists and small companies.
With the release of KiCad 5 in 2018 it went from being "a pain to use, but technically sufficient" to being a genuine option for less-demanding professionals. Since then they've been absolutely killing it, with major releases happening once a year and bringing enough quality-of-life improvements that it is actually hard to keep track of all of them.
From the type of new features it is very obvious that a lot of professional users are now showing interest in the application, and as we've seen with Blender a trickle of professional adoption can quickly turn into a flood which takes over the entire market.
KiCad still has a long way to go when it comes to complex high-speed boards (nobody in their right mind would use it to design an EPYC motherboard, for example), but it is absolutely going to steamroll the competition when it comes to the cookie-cutter 2/4/6 layer PCBs in all the everyday consumer products.
The thing I miss is being able to rotate an IC by 45 degrees.
[1] https://sschueller.github.io/posts/ci-cd-with-kicad-and-gitl...
Guess what: user adoption increased dramatically, because it became pleasant (or at least tolerable) to use for people who had used literally any other program.
V8 brought into the core many things that used to be plugins, and replaced the old utilities that the neckbeards in the forum were crying to keep, or else! (Or else more adoption, as it turned out.)
V9 had even more improvements, but also many regressions, over V8. V10 might be the release that truly consolidates the core of the suite, and then they can really start to focus on high-speed designs.
I've navigated many programs over my career, and unless a future employer mandates Altium, or purely technical reasons (8+ layers, high-speed designs) require Cadence, it's only KiCad for me.
Incidentally, it feels like these past two years FreeCAD, GIMP, and Inkscape have started moving away from listening to the noisy members of the community and toward the useful members of the community. I'm seeing slow but steady progress that will eventually accelerate and make these tools true alternatives, as happened with KiCad (though it will be really tough for GIMP: even if it's perfectly usable for many, many tasks, any graphic designer will kick and scream if they're not given the Adobe suite, pity).
Myself, I only do basic graphics, like replicating buttons and such so I don't bother my colleague, or applying corrections to my photos, and I proudly do that in GIMP and Inkscape.
GIMP is sadly just bad, even when it comes to the basics. It has nothing to do with wanting Adobe products; it's more that GIMP is a "coder's tool".
Every time I checked in the last decade, I still had to input a resolution when creating a new image layer. That's a fundamental operation that hasn't been that clumsy in Photoshop since the 90s (Photoshop has "infinite" layers that can be larger than the image; yes, it's "bloated", but that's what you want as an artist 90% of the time, not an annoying border).
True, but I also remember vehement discussions where everybody else in the world wanted scroll/zoom to work like in every other piece of software, while a few vocal users would spam every thread and discussion insisting that we (the rest of the world) should be using other software instead.
You know, the usual hostile attitude open source communities are famous for. I guess that for GIMP the moment that changes everything will be when they add a proper circle tool /s
GIMP and Inkscape are already moving in the right direction with the new UIs, fingers crossed
Blender went from a shitshow way worse than GIMP to almost killing the competition. Those working on GIMP took notice (and perhaps those that had felt sidelined before dared speak up).
It is very kludgy and cumbersome to split a project into several PCBs (for example, a stack of PCBs connected by a backplane or headers, like an Arduino and a shield for it) and/or to have variations of the PCB for one schematic, like TH and SMD variants of the board for exactly the same schematic.
Even in my very modest almost-electrical (as opposed to electronic) projects I need one or the other from time to time.
As far as I understand, it is a limitation that is not easy to fix, because the whole architecture of KiCad is built around this 1-1-1 (one project, one schematic, one board) principle.
I think FreeCAD might be on a distant hilltop in their rearview these days; check it out again.
The most important improvement is the toponaming heuristic solver spearheaded by Realthunder.
Since that was merged into mainline, it seems the devs keep breaking the UX and shortcuts without rhyme or reason, while the fundamentals are broken beyond repair.
I would never recommend FreeCAD to anybody, even though it is the only CAD I use, and I actually write Python for it for some automation.
I cannot live without freecad. But damn it's a mess.
which has been discussed here in the past:
https://news.ycombinator.com/item?id=37979758
https://news.ycombinator.com/item?id=40228068
https://news.ycombinator.com/item?id=41975958
which if it just had parameters/scripting would have a lot more potential.
It's a shame, because it looks really nice. Maybe I'll check it out for the next thing I do where that's not a requirement. Might be a shame since I've finally learnt how to (basically) use FreeCAD now!
I've also extended the functionality with Python, and have heavily customized the theme and shortcuts to fit my personal taste.
I not only tolerate the software, but enjoy using it, and am quite proficient at it.
I would recommend FreeCAD to others, but with some caveats. The most important being that they need to be willing to tolerate a few hours of introductory material, and second that they are serious about using the software long-term.
Otherwise, I'd probably just recommend Onshape. But, for many others, FC is fully viable.
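For anyone wondering what "writing Python for FreeCAD" can look like in practice, here's a rough sketch of my own (not the commenter's code), using only the standard Part and Mesh scripting APIs; the part, dimensions, and file path are made up for illustration. It builds a small printable plate with a hole without touching the GUI:

```python
import FreeCAD as App
import Mesh  # FreeCAD's mesh module, used here for STL export

# Create a simple plate with a mounting hole.
doc = App.newDocument("Bracket")

plate = doc.addObject("Part::Box", "Plate")
plate.Length, plate.Width, plate.Height = 40, 20, 3  # mm

hole = doc.addObject("Part::Cylinder", "Hole")
hole.Radius, hole.Height = 2.5, 3
hole.Placement.Base = App.Vector(10, 10, 0)  # position the hole on the plate

cut = doc.addObject("Part::Cut", "PlateWithHole")
cut.Base, cut.Tool = plate, hole
doc.recompute()

# Export the result for slicing / 3D printing (path is a placeholder).
Mesh.export([cut], "/tmp/bracket.stl")
```

Run headless with `freecadcmd script.py` or pasted into the Python console, this kind of script is what makes FreeCAD tolerable for repetitive, parametric work even when the GUI frustrates.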
Unity and Unreal are dinosaurs that target the shrinking console market. Godot is being built in their image. My hope is that something more versatile like Bevy becomes common so that we have something that could potentially compete with the next generation of Roblox.
I can think of a few more: Git obsoleted an entire category of commercial software seemingly almost overnight, VSCode has become by far the most widely used IDE (not entirely open source, though), TeX still dominates mathematical typesetting AFAIK (as it has for as long as computers have been used for that), (lib)ffmpeg is used everywhere for video/audio transcoding and between them nginx and apache still account for the majority of webservers. Most popular programming language compilers/interpreters/runtimes are open source too, of course.
So various projects have come up ever since to try to patch the UX.
Performance.
Since the CVS days, version control systems kept getting slower and slower. Centralized servers worsened the problem by doing everything synchronously, making people wait for locks, wait for checkouts, wait for diffs, wait for everything. But even the distributed ones like bzr or hg were slow as hell. What git enabled was the far superior ergonomics of not having to get a coffee while running a "git diff" on a project the size of the Linux kernel, X.org, or LibreOffice. All the whining about inconsistent command-line options and weird subcommand naming is totally secondary to this.
Nowadays, many of the aforementioned competitors either died out or worked on their performance problems. Nonetheless, git is still unmatched in this regard.
Even the qualifier "end user" application doesn't seem obvious to me. Is Photoshop an end user application or a tool for artists?
In the context of understanding where and why OSS is dominant, I think the tool/app distinction is whether the thing solves a software problem or whether it solves a business problem.
Through that lens, Photoshop is an application, while VSCode is a tool.
Why is Photoshop not considered a tool? You use it to [...], as a tool. But then I do not see the difference with an application; an application can be a tool, can it not? It makes my head spin, and English is not my native language.
But at the same time Visual Studio is in the same category of software as VS Code, Oracle DBMS in the same category as postgresql, Perforce P4 in the same category as git. Surely that can't be it?
I'd agree in a heartbeat that developers are better at solving problems for developers. The less disconnect there is between developer and customer the better development goes, and the disconnect doesn't get lower than building developer tooling. But those things aren't any less apps or more tools than the things artists or technical writers or accountants use.
Hmm, what commercial software would that be? Visual SourceSafe (lol)? ;)
Git mostly replaced SVN, and that is free and open too. But in scenarios where specialized version control software (like Perforce) is needed, git (and git-lfs) breaks down too, and quicker than SVN does - e.g. for versioning large binary files git has actually been a massive step backwards, and no real solution has shown up in two decades.
- AccuRev SCM (2002)
- Azure DevOps Server (via TFVC) (2005)
- ClearCase (1992)
- CMVC (1994)
- Dimensions CM (1980s)
- DSEE (1984)
- Integrity (2001)
- Perforce Helix (1995)
- SCLM (1980s?)
- Software Change Manager (1970s)
- StarTeam (1995)
- Surround SCM (2002)
- Synergy (1990)
- Vault (2003)
- Visual SourceSafe (1994)
A lot of companies were also building their own internal version control software, either from scratch or as a bunch of loose scripts on top of other existing solutions, often turning version control, package management, and build scripts into one single complex, messy solution. I worked for a company early in my career that had at least 4 different version control systems in use across different projects, and even more build systems (all a mix of commercial software and home-grown solutions on top).
These days almost everyone uses Git. Some companies use Mercurial or SVN.
One commercial actor that is still around is Perforce, which remains popular in the game industry, since managing large game assets isn't a great fit for Git (though it is possible with Git LFS, git-annex, or similar solutions).
If anything, OnlyOffice is incredibly polished. I don't know how they did it but it seems to work better and have a much cleaner UI than LibreOffice even though it's much newer.
DaVinci Resolve is getting better but still not comparable to After Effects.
Blender already has a lot of pieces in place to tackle this.
May I ask your opinion, as an industry insider, on what makes for good or bad OpenUSD support?
Not true of business grads - who, btw, are the ones with the purchasing power for enterprise-wide M$ Office when they join orgs. STEM grads need to kneel and obey their purchasing decisions.
Of course Blender is a marvelous piece of software, no doubt in that.
You don't see the same success with GIMP, Krita, ...
Remember that Blender was open-sourced because it failed commercially. Its commercial background is completely irrelevant to its success, which took years to achieve; for more than a decade most 3D artists saw it as a toy (undeservedly IMO, because even by 2.44 or so it had become very capable, but the UI, despite massive improvements, was still very undiscoverable and alien to most users).
You can download "Blender Publisher 2.25", the last commercial release, from Blender's own site to check it out.
Is this even true? My understanding is that 3D Studio and Maya used to be ahead.
I suspect in general "big companies" and the ecosystems they operate in are the main reason adoption (and subsequent investment and development) hasn't happened. The Microsoft Office suite is a good example, since many companies likely run the full Office + Teams + Outlook stack. It all "seamlessly" (not really lol) works together, and it's attractive to sell corporate solutions like that.
GIMP 2.x was IMO better. Also much easier to compile and get running. Fewer things to worry about.
I think you paint a rather selective picture here, though. Quite a lot of open source software is really, really bad.
gcc? It’s hard to imagine any of the projects mentioned without a good compiler.
GCC, and now Clang, have beaten the Intel compiler (and others). Nginx has beaten/replaced commercial web servers. Stockfish has beaten commercial chess engines, and Lichess is much better than commercial chess sites.
Just 3 examples that came to my mind reading your comment.
On aesthetics, it definitely beats out Lightroom Classic, which is a closed-source paid tool (sidenote: I really miss Aperture).
I'm curious, what are the main points where it's lacking? I hear stability is one of them, but what about features?
It’s powerful and pleasant to use. Even the release marketing page is beautiful and well-made.
I like open source as much as the next guy, but outside of developer tools there is little that comes close to Blender in terms of utility and UX.
Is it funding? Specific individuals? Are there PMs and designers? Whatever it is, it’s working!
The creator apparently was selling it as freemium software in 1998, and then the bubble burst and the company shut down in 2002. But the creator then set up a non-profit called the Blender Foundation, launched a Free Blender campaign [2] (the forum post is still up!) to raise money from its users, and bought out the rights to the software from the investors.
[1] https://www.blender.org/about/history/
[2] https://blenderartists.org/t/free-blender-campaign-launched/...
Relying on individual donations from users helps a lot with keeping Blender aligned with the interests of its actual users. There isn't one corporate sponsor (or a few of them) controlling everything.
Plus the GPL license which protects the freedom of its users.
GIMP vs Krita is a similar story. GIMP will never be the Photoshop replacement they aimed to be.
NO other open source software (yes, I'd die on this hill) has good UX, and it's horrible for adoption by the larger public.
Blender nodes have come a long way over the past decade and it's incredibly satisfying to see the care with which they have been developed. Blender's node editor is my personal favorite node editor I've ever used in any software, and I often find myself wishing other software adopted some of their UI and UX conventions.
Been a happy user since, oh, v2.75? And looking forward to being a user for many more releases to come.
Donate to Blender! [0]
These days I wouldn't want to bother doing manual animations or mesh creation. The computer must do this for us.
Learning the shortcuts is really the way to go though. Initially painful but 100x more productivity in the long run, and really the way Blender is meant to be used. Honestly just doing a single basic tutorial will have you learning all of them in no time.
They actually announced they were dropping support in February. Nobody noticed or cared.
https://passivestar.xyz/posts/instance-scattering-in-blender...
Now I want to look into it more, but I'd imagine that "Blackbody" and sky generation nodes might still assume a linear sRGB working space.
Since people are always asking for “real world examples”, I have to point out this is a great place to use an agent like Claude Code or Codex. Clone the source, have your coding assistant run its /init routine to survey the codebase and get a lay of the land, then turn “thinking” to max and ask it “Do the Blackbody attribute for volumes and the sky generation nodes still expect to be working in linear sRGB? Or do they take advantage of the new ACES 2.0 support? Analyze the codebase, give examples and cite lines of code to support your conclusions.”
The best part: I’m probably wrong to assert that linear sRGB and ACES 2.0 are some sort of binary, but that’s exactly the kind of knowledge a good coding agent will have, and it will likely fold an explanation of the proper mental model into its response.
If you make a color space for a display, the intent is that you can (eventually) get a display which can display all those colors. However, given the shape of the human color gamut, you can't choose three color primaries which form a triangle which precisely contain the human color gamut. With a display color space, you want to pick primaries which live inside the gamut; else you'd be wasting your display on colors that people can't see. For a working space, you want to pick primaries which contain the entire human color gamut, including some colors people can't see (since it can be helpful when rendering to avoid clipping).
Beyond that, ACES isn't just one color space; it's several. ACEScg, for example, uses a linear transfer function and is useful for rendering applications. A colorist would likely transform ACEScg colors into ACEScc (or something of that ilk) so that the response curves of their coloring tools are closer to what they're used to (i.e. they have a logarithmic response similar to old-fashioned analogue telecine machines).
Or are you saying that if some intermediate transform pushes a color beyond P3, it will get clipped? Then I understand...
Exactly! The conversion between ACES (or any working color space) and the display color space benefits from manual tweaking to preserve artistic intent.
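To illustrate the working-space vs. display-encoding distinction in the exchange above, here's a small numpy sketch (my own illustration, not anything from the thread): the rendering math happens on linear values, and only the final image goes through a display transfer function such as the sRGB curve. The 3x3 primaries conversion (e.g. from ACEScg/AP1 primaries to the display's primaries) is deliberately left out to keep it short.

```python
import numpy as np

def srgb_oetf(linear):
    """Encode linear light for an sRGB display (the piecewise sRGB curve)."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(np.clip(linear, 0.0, None), 1 / 2.4) - 0.055)

# Rendering/compositing math (e.g. adding two light contributions)
# happens on linear values in the working space...
highlight = np.array([0.18, 0.18, 0.18]) + np.array([0.05, 0.02, 0.01])

# ...and only the final image is encoded for the display.
display_pixel = srgb_oetf(highlight)
print(display_pixel)
```

Doing the addition after encoding would give a visibly different (and physically wrong) result, which is the whole reason working spaces stay linear until the display transform at the end.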
Have we gone backwards to the point where we can’t even serve a static page now?