It’s just nice to be able to draw on the go or not be tied to a Cintiq.
You can connect the iPad to a larger screen and use it like a Cintiq though.
It’s definitely not fully featured enough for more complex art but the appeal is more than just the accessibility.
Almost all my professional concept artist friends have switched over, but I agree it’s not a great fit for matte paintings.
https://news.ycombinator.com/item?id=44622374 (2025-07-20; 62 comments)
Probably a useless submission but the discussion linked to the real thing at https://github.com/ahujasid/blender-mcp
(There's even https://www.printables.com/model/908684-spacemouse-mini-slim... - which I know works in FreeCAD.)
Getting fully featured Blender on the iPad will be amazing for things like Grease Pencil (drawing in 3D) or texture painting, but on the other hand my side-project just became a little bit less relevant.
I'll have to take a look at whether I can make some contributions to Blender.
[0] https://apps.apple.com/nl/app/shapereality-3d-modeling/id674...
There is room for more than one modeling app on iOS as long as you can offer something that Blender doesn’t, even if it’s just better performance.
"UberPaint: Layer-based Material Painting for Blender (PUBLIC BETA)" https://theworkshopwarrior.gumroad.com/l/uberpaint
I doubt that anything other than native blender functionality could serve all this with any elegance.
I teach Substance Painter and it does a good job at hiding all this complexity until you need it. It is very common for students to underestimate it as an app… to view it as just Photoshop for 3D.
Blender has a ton of competitors. They're all commercial and have corporate backing. If anything, blender is the "little guy". It is utterly amazing what Ton has managed to do with Blender.
Eh ... blender is open source.
I think I'm going to focus more on CAD / architectural use cases, instead of attempting feature parity with Blender's main selling points (rendering, hard-surface modeling, sculpting).
I'm definitely aiming to build a more focused app compared to Blender, as I want to focus explicitly on modeling, e.g. BRep or NURBS.
What kind of apps have you worked on?
Eg, more like a sketch than asset production, and metaballs-focused which obviously is going to create a very different topology than NURBS, etc.
My first app was Polychord for iPad which is a music making app that came out in 2010.
These days Vibescape for visionOS is a big one for me, and there are others. I also spent about a decade in the corporate world working on apps at Google, etc.
I tried uMake a while back, but found the 3D viewport navigation a bit hard to use, and would often find out I had been drawing on the wrong plane after orbiting the camera.
After using something like Tilt Brush in VR, it's hard to go back to a 2D screen that doesn't instantly communicate the 3D location of the brush strokes you're placing.
Blender has a pretty big learning curve. Since your app has a much narrower focus, you can still make something a lot of people will use.
With implants, we are probably decades away.
What currently works best is monitoring the motor cortex with implants as those signals are relatively simple to decode (and from what I recall we start to be able to get pretty fine control). Anything tied to higher level thought is far away.
As for thought itself, I wonder how we would go about it (assuming we manage to decode it). It's akin to making a voice-controlled interface, except you have to say aloud everything you are thinking.
Ever since that paper came out, I (someone who works in ML but has no neuroimaging expertise) have been really excited about the future of noninvasive BCI.
Would also be curious to know if you have any thoughts on the several start-ups working in parallel on optically pumped magnetometers for portable MEG helmets.
Not really. I left the field mostly because I felt bitter. I find that most papers in the field are more engineering than research. I skimmed through the MindEye paper and don't find it very interesting. It's more of a mapping from "people looking at images in an fMRI" to identifying the shown image. They make the leap of saying that this is usable to detect the actual mind's eye (they cite a paper that requires 40 hours of per-subject training, on that specific dataset), which I quite doubt. Also, we're nowhere near having a portable fMRI.
As for portable MEG, assuming they can do it: it would indeed be interesting. Since it still relies on synchronized regions, I don't think high-level thinking detection is possible, but it could be better for detecting motor activity and some mental states.
In theory, if Blender exposed its UI to the Apple accessibility system, it would let you use things via BCI.
Asus has been releasing, year after year, two performance beasts with a very portable form factor, multi-touch support, and top-of-the-line mobile CPUs and GPUs: the X13 and Z13.
https://rog.asus.com/laptops/rog-flow-series/?items=20392
Considering the Surface Pro line also gets the newest Ryzen AI chips with up to 32GB of RAM, having them as second-class citizens is kinda sad.
PS: Blender already runs fine as-is on these machines. But getting a new touch paradigm would be extremely nice, and these would be a better test bed than a new platform IMHO.
I got a Lenovo Yoga precisely because it has a good drawing experience and the Lunar Lake Intel SoCs that have GPU acceleration in Blender.
And I fumbled the CPU attribution for the Surface devices; they are of course Intel processors (Intel Core Ultra Series 2) for business. The consumer line stays on Snapdragon.
It makes navigating 3D spaces so much easier with keyboard and mouse.
Basically, Blender says "start with a cube". I want to ask why and what are the other options.
You can create whatever startup file you want.
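If you haven't done it before, here's a minimal sketch of that from Blender's scripting workspace (the usual route is simply File > Defaults > Save Startup File; the UV sphere is only a placeholder for whatever you'd rather start from):

    import bpy

    # Remove the default cube but keep the camera and light.
    # "Cube" is the name of the default object in a fresh scene.
    cube = bpy.data.objects.get("Cube")
    if cube is not None:
        bpy.data.objects.remove(cube, do_unlink=True)

    # Optionally add whatever you'd rather start from, e.g. a UV sphere.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0)

    # Save the current file as the new startup file
    # (equivalent to File > Defaults > Save Startup File).
    bpy.ops.wm.save_homefile()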
Other approaches include subdivision-surface workflows, metaballs, signed distance functions, procedural modeling (geometry nodes), and Grease Pencil.
As far as workflows go, there are far too many to list, and most artists use multiple, but common ones are:
* Sculpt from a base mesh, increasing resolution as you go with remeshing, subdivision, dyntopo, etc.
* Constructively model with primitives and boolean modifiers (see the sketch after this list).
* Constructively model with metaballs.
* Do everything via extrusion and joining, as well as other basic face- and edge-oriented operators.
* Use geometry nodes and/or shaders to programmatically end up with the result you want.
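To make the "primitives and boolean modifiers" item concrete, here's a rough bpy sketch; the object names and dimensions are arbitrary, and in practice you'd usually do the same thing interactively in the viewport:

    import bpy

    # Add a cube and a cylinder that overlaps it.
    bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0, 0, 0))
    cube = bpy.context.active_object

    bpy.ops.mesh.primitive_cylinder_add(radius=0.5, depth=4.0, location=(0, 0, 0))
    cylinder = bpy.context.active_object

    # Subtract the cylinder from the cube with a Boolean modifier.
    mod = cube.modifiers.new(name="Hole", type='BOOLEAN')
    mod.operation = 'DIFFERENCE'
    mod.object = cylinder

    # Apply the modifier on the cube and hide the cutter object.
    bpy.context.view_layer.objects.active = cube
    bpy.ops.object.modifier_apply(modifier=mod.name)
    cylinder.hide_set(True)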
On the other end of the spectrum you have "sculpting" for organic shapes, and you might want to dig into the community around ZBrush if you want to fill the gap from "start with a cube" to real sculpting.
More niche, but where I feel at home, is the model-with-coordinates-and-text-input approach. I don't know if it has a real name, but here is where it pops up and where I worked with it (a small Blender-Python sketch of the idea follows the list):
- In original AutoCAD (think 90s) a lot of the heavy lifting was done on an integrated command line, hacking in coordinates by hand
- POVRay (and its predecessors and relatives) has a really nice DSL to do CSG with algebraic hypersurfaces. Sounds way more scary than it is. Mark Shuttleworth used it on his space trip to make a render for example.
- Professionally I worked for a couple of years with a tool called FEMAP. I used a lot of coordinate entry and formulas to define stuff and FEMAP is well suited for that.
- Finally, OpenSCAD is a contemporary example of the approach.
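Since this is a Blender thread, here's a tiny sketch of the same "type in coordinates" idea using Blender's Python API rather than OpenSCAD or FEMAP; the pyramid is just an illustrative shape:

    import bpy

    # Build a mesh directly from vertex coordinates and face indices.
    verts = [
        (-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0),  # square base
        (0, 0, 1.5),                                      # apex
    ]
    faces = [
        (0, 1, 2, 3),          # base quad
        (0, 1, 4), (1, 2, 4),  # four triangular sides
        (2, 3, 4), (3, 0, 4),
    ]

    mesh = bpy.data.meshes.new("Pyramid")
    mesh.from_pydata(verts, [], faces)
    mesh.update()

    obj = bpy.data.objects.new("Pyramid", mesh)
    bpy.context.collection.objects.link(obj)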
Another popular approach to designing shapes is live addition/subtraction. Nomad supports that as well, but it's not as intentional as, say, 3D Studio Max (for Windows), where animating those interactions is purposely included as its point of difference.
There's also solid geometry modelling, intended for items that would be manufactured; SolidWorks is common for that.
Then finally you have software like Maya which lets you take all manner of approaches, including a few I haven't listed here (such as scripted or procedural modelling.) The disadvantage here is that the learning curve is a mountain.
And if you like a bit of history, POV-Ray is still around; you feed it text markup to produce a render.
- sculpting. You start with a high density mesh and drag the points like clay.
- poly modeling. You start with a plane and extend the edges until you make the topology you want.
- box modelling. You start with a cube or other primitive shape and extend the faces until you make the shape you want.
- nurbs / patches : you create parts of a model by moving Bézier curves around and creating a surface between them
- B-reps / parametric / CSG / CAD / SDF / BSP: these are all similar in that you start with mathematically defined shapes and then combine or subtract them from each other to get what you want, though they're all different in implementation (a toy SDF sketch follows this list)
- photogrammetry / scans : you take imagery from multiple angles and correlate points between them to create a mesh
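To make the SDF flavour of CSG above a bit less abstract, here's a toy Python sketch (not tied to any particular package): shapes are functions returning a signed distance (negative means inside), and the booleans are just min/max over those functions.

    import math

    def sphere(radius):
        return lambda x, y, z: math.sqrt(x*x + y*y + z*z) - radius

    def box(hx, hy, hz):
        # Standard signed distance to an axis-aligned box
        # with half-extents hx, hy, hz.
        def sdf(x, y, z):
            dx, dy, dz = abs(x) - hx, abs(y) - hy, abs(z) - hz
            outside = math.sqrt(max(dx, 0)**2 + max(dy, 0)**2 + max(dz, 0)**2)
            inside = min(max(dx, dy, dz), 0)
            return outside + inside
        return sdf

    def union(a, b):      return lambda x, y, z: min(a(x, y, z), b(x, y, z))
    def difference(a, b): return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

    # "Cube with a spherical bite taken out of its middle"
    shape = difference(box(1, 1, 1), sphere(1.2))
    print(shape(0, 0, 0))  # positive: the centre has been carved away
    print(shape(1, 1, 1))  # 0.0: the cube corner survives, on the surface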
My main tool is OpenSCAD, as I am a programmer and I model for 3D printing. It supports both CSG and extrusion, but not mesh manipulation like Blender.
One paradigm is not better than the other. Blender is nice for modelling, for example, characters, but it's really painful for doing precise shapes. For modelling multi-part action figures for 3D printing you'll need both, sometimes modelling something in one tool and finishing it in another.
Having a UI/UX for tablets is awesome, especially for sculpting (ZBrush launched its iPad version last year, but since Maxon bought it, everything is subscription-only).
I joined the Blender development fund last year, they do pretty awesome stuff.
Almost all VR devices require lots of motion, have limited interaction affordances and have poor screens.
So you’re going to be more tired with a worse experience and will be working slower.
What XR is good for in the standard creative workflow is review at scale.
Blender does have an OpenXR plugin already but usability is hit or miss.
This is part of the reason why high end 3d cursors have resistive feedback, especially since fine motor control is much easier when you have something to push against and don't have to support the full weight of your arm.
Freebird is still quite basic, but it's under active development and more modeling and posing features are being added.
So a core set of people definitely use VR in Blender for creation/editing, but I agree the number is quite small.
I really want to be able to write with the pencil in coding apps on the go, now that handwriting recognition has gotten good enough, but so far most of them provide a very lame experience.
It is like people don't really think outside the box about how to take advantage of new mediums.
The same applies to all those AI chat boxes: I don't want to type even more, I want to just talk to my computer, or have AI-based workflows integrated into tooling that feel like natural interactions.
I guess my hope now is that rather than selecting all the individual tools, I just want AI to help me. At one level it might be it trying to recognize gestures as shapes: you draw a near-circle, it makes it a perfect circle. Lots of apps have tried this. Unfortunately they are so finicky, and then you still need options to turn it off when you actually don't want a perfect circle. Add more features like that and you're back to an impenetrable app. That said, maybe if I could speak it: I draw a circle-like sketch and say "make it a circle"?
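For what it's worth, the "make it a circle" part is fairly tractable on its own; here's a hedged sketch of a least-squares (Kåsa) circle fit over sampled stroke points, with numpy as an assumed dependency. The thresholds and UX around when to snap are the genuinely hard part.

    import numpy as np

    def fit_circle(points):
        # Fit x^2 + y^2 + D*x + E*y + F = 0 to Nx2 stroke points,
        # returning (cx, cy, r): centre and radius of the best-fit circle.
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2, -E / 2
        r = np.sqrt(cx**2 + cy**2 - F)
        return cx, cy, r

    # A slightly wobbly hand-drawn "circle" around (10, -3) with radius 5:
    theta = np.linspace(0, 2 * np.pi, 50)
    stroke = np.column_stack([
        10 + 5 * np.cos(theta) + np.random.normal(0, 0.2, theta.size),
        -3 + 5 * np.sin(theta) + np.random.normal(0, 0.2, theta.size),
    ])
    print(fit_circle(stroke))  # roughly (10, -3, 5)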
But it's not just that; trying to model almost anything just takes soooooooooo long. You've got to learn face selection, vertex selection, edge selection, extrusion options, chamfer options, bevel options, and on and on, and that's just the geometry for geometry-based things (furniture, vehicles, buildings, appliances). Then you have to set up UVs, etc.
And it gets worse for characters and organic things.
The process hasn't changed significantly since the mid-90s AFAICT. I learned 3ds in '95. I learned Maya in 2000. They're still basically the same apps 25-30 years later. And Blender fits right in, being just as giant and complex. Certain things have changed: sculpting like ZBrush, node-based geometry like Houdini, and lots of generators for buildings, furniture, plants, trees. But the basics are still the same, still tedious, still needing 1000s of options.
It's begging for disruption to something easier.
For my part, I think this is because to do good creative work you need minute control, and minute control essentially just means adding lots of controls to software. Sure you can mediate this with AI, the learning curve especially, but that only really helps unskilled workers and hinders skilled workers (e.g., a slider is a more efficient way for a skilled user to get an exact exposure, rather than trying to communicate that with an AI). And I don't really think there's a need for software where unskilled users have a high-degree of control (i.e., skill and control are practically synonyms).
I just feel like there's some version of these tools that can be 100x simpler. Maybe it will take a holodeck to get there, and AI reading my mind so that it knows what I want to do without me having to dig through 7 levels of menus, subsections, etc.
There are many areas where custom hardware interfaces are popular in conjunction with software:
- MIDI controllers (note both for the piano keys, and for the knobs/sliders, and even foot pedals)
- Jog wheels for NLEs
- Time-coded vinyl https://en.wikipedia.org/wiki/Vinyl_emulation
- WACOM Tablets
All of these custom hardware interfaces accomplish the same thing: They make using the software more tactile and less abstract. Meaning you replace abstract-symbol lookup (e.g., remembering a shortcut or menu item) with muscle memory (e.g., playing a chord on a piano).
So TLDR, the reason that we don't have what you're looking for is that we don't have a good way to simulate clay and wood as hardware that interfaces nicely with software.
Note there's a larger point here, which I think is more what you were getting at. I think people sometimes expect (and I expected this when I was younger) that computers could invent new, better interfaces to tasks (e.g., freed from the confines of physics). Now I think this is totally the opposite: interfaces carried over from the physical world are usually better (which makes sense if you think about it; often what we're talking about are things that human beings have refined over thousands of years), and enforcing the laws of physics usually actually makes things easier (e.g., we've been dealing with them since the moment we were born, so we have a lot of practice).
Finally, also note that custom hardware interfaces only tend to help along one axis (e.g., a MIDI controller only helps enter notes/control data). The software still ends up being complex because people also want all the things computers are good at that real-world materials aren't, like redo/undo, combining back together things that have been broken apart, zooming in/out, seeing the same thing from several perspectives at once, etc.
PS: I don't even know if the Holodeck or a mind link-up would really help here. It's possible, but it's also possible it's just difficult for our brains to describe what we want in a lot of cases. E.g., take just adjusting the exposure: you can turn it down, oh but wait, I lost the violet highlight that I liked; how can I light this scene and keep that highlight and make it look natural? I don't know, maybe this stuff does map to a Holodeck/mind link, but it's also possible that just having tons of options for every light really is the best solution to that.
This assumes the complexity is incidental rather than inherent. I think the problem is similar to how Reality has a surprising amount of detail[1]. AI will likely eventually make a dent in this, but I think as an artist you tend to want a lot of granular control over the final result, and 3D modeling+texturing+animation+rendering (which is still only a subset of what Blender does) really does have a whole lot of details you can, and want to, control.
[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
And there are also add-ons around that: https://docs.blender.org/manual/en/latest/addons/3d_view/vr_...
:c
If they insist on doing it, it would be a good idea to rebrand it so expectations are in line with what a tablet interface enables/prevents.
It's like saying "People with their mouths full of food couldn't perform Shakespeare so we translated Hamlet to a maximum of 2 syllable words. Now everyone can perform Shakespeare."
https://docs.blender.org/manual/en/latest/advanced/app_templ...
So touchscreen support will be like a mode you can switch on. I don't think desktop users will be affected.
I think it’s really cool for Blender to be experimenting with UX given its incredible stature as an OSS project.
If Blender does this well it could change the landscape and culture of OSS.
Of course there are risks but fortune favors the bold.
As long as they are honest with themselves and the users about what it is capable of, great.
But it feels like some UX people got the steering wheel and are taking everyone on a joyride.
I'm not sure exactly what that means, intuitively I agree with your comment, but I guess the ZBrush port acts as a data point that's at least a partial counterargument (only partial because, e.g., Maxon hasn't ported Cinema 4D, which is more directly analogous to Blender).
I'm really curious about the ZBrush port, e.g., is that really because customers were asking for it (or usage data from Maxon's own iPad-only [acquired] app Forger indicated there was interest)? Or is it a defensive move trying to prevent an Innovator's Dilemma-style disruption from the bottom by an app like Nomad Sculpt https://nomadsculpt.com?
If accessibility is a priority for Blender, then they should absolutely be trying this. This isn’t going to be taking away the keyboard/mouse control that currently exists. The point is to give people, who (for whatever reason) can’t use a keyboard and mouse, a tool that they don’t currently have access to. There is also a large segment of the younger user base whose primary interface to computing is a tablet. This has the potential to open a whole new market of users for Blender.
Give them a little credit… I don’t think Blender is going to “downgrade” their existing workflows. For this tablet/pen project, who knows what kind of UI/UX they will have - it could be great. Plus, it is important for a project like Blender to have the freedom to experiment, otherwise you end up with a static ecosystem.
But, honestly, why wouldn’t you want Blender to make 3D work available to others who prefer to work with a different set of input devices? If that tool ends up as a “Blender Lite”, who cares? It may not be useful to you, but it will be useful to someone else. And maybe they find a new feature that will be useful to you in the process.
But they gave the impression it would be Blender.
Could be really powerful in conjunction with Blender being supported on iPad (and possibly iPhone).
Are you looking to use Blender on a small touch screen backed by desktop Linux?
https://www.reddit.com/r/wacom/comments/16215v6/wacom_one_ge...
Looking into using the new gen 2 w/ touch on an rPi 5.
Makes me wonder if anyone's playing osu! on their Steam Decks...