It’s just nice to be able to draw on the go or not be tied to a Cintiq.
You can connect the iPad to a larger screen and use it like a Cintiq though.
It’s definitely not fully featured enough for more complex art, but the appeal is more than just the accessibility.
Almost all my professional concept artist friends have switched over, but I agree it’s not a great fit for matte paintings.
https://news.ycombinator.com/item?id=44622374 (2025-07-20; 62 comments)
Probably a useless submission but the discussion linked to the real thing at https://github.com/ahujasid/blender-mcp
Getting fully featured Blender on the iPad will be amazing for things like Grease Pencil (drawing in 3D) or texture painting, but on the other hand my side-project just became a little bit less relevant.
I'll have to take a look at whether I can make some contributions to Blender.
[0] https://apps.apple.com/nl/app/shapereality-3d-modeling/id674...
There is room for more than one modeling app on iOS as long as you can offer something that Blender doesn’t, even if it’s just better performance.
Even with implants, we are probably decades away.
What currently works best is monitoring the motor cortex with implants, as those signals are relatively simple to decode (and from what I recall we're starting to be able to get pretty fine control). Anything tied to higher-level thought is far away.
As for thought itself, I wonder how we would go about it (assuming we manage to decode it). It’s akin to making a voice-controlled interface, except you have to say aloud everything you are thinking.
Ever since that paper came out, I (someone who works in ML but has no neuroimaging expertise) have been really excited for the future of noninvasive BCI.
Would also be curious to know if you have any thoughts on the several start-ups working in parallel on optically pumped magnetometers for portable MEG helmets.
In theory, if Blender exposed its UI to the Apple accessibility system, it would let you use things via BCI.
Year after year, Asus has been releasing two performance beasts with a very portable form factor, multi-touch support, and top-of-the-line mobile CPUs and GPUs: the X13 and Z13.
https://rog.asus.com/laptops/rog-flow-series/?items=20392
Considering the Surface Pro line also gets the newest Ryzen AI chips with up to 32GB of RAM, having these machines as second-class citizens is kinda sad.
PS: Blender already runs fine as-is on these machines. But getting a new touch paradigm would be extremely nice, and would be a better test bed than a new platform IMHO.
It makes navigating 3D spaces so much easier with keyboard and mouse.
Basically, Blender says "start with a cube". I want to ask why, and what the other options are.
You can create whatever startup file you want.
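For instance, a minimal sketch of replacing the default scene from Blender's scripting console and saving it as the startup file; the UV sphere is just a stand-in for whatever you'd rather start from.

```python
import bpy

# Remove everything in the default scene (cube, light, camera).
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Add whatever you'd rather start from, e.g. a UV sphere.
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0, 0, 0))

# Persist this scene as the new default
# (same as File > Defaults > Save Startup File).
bpy.ops.wm.save_homefile()
```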
Other approaches include subdivision surface (subsurf) workflows, metaballs, signed distance functions, procedural modeling (geometry nodes), and Grease Pencil.
As far as workflows go, there are far too many to list, and most artists use multiple, but common ones are:
* Sculpt from a base mesh, increasing resolution as you go with remeshing, subdivision, dyntopo, etc.
* Constructively model with primitives and boolean modifiers (a small sketch of this one follows the list).
* Constructively model with metaballs.
* Do everything via extrusion and joining, as well as other basic face- and edge-oriented operators.
* Use geometry nodes and/or shaders to programmatically end up with the result you want.
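As a taste of the primitives-plus-booleans item, here's a hedged sketch scripted through Blender's Python API (bpy); the object names, sizes, and placement are all made up.

```python
import bpy

# Start from two overlapping primitives: a cube and a sphere.
bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))
cube = bpy.context.active_object
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.2, location=(1, 0, 0))
sphere = bpy.context.active_object

# Carve the sphere out of the cube with a boolean modifier.
mod = cube.modifiers.new(name="Carve", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = sphere

# Bake the result into the cube's mesh and hide the cutter.
bpy.context.view_layer.objects.active = cube
bpy.ops.object.modifier_apply(modifier=mod.name)
sphere.hide_set(True)
```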
On the other end of the spectrum you have "sculpting" for organic shapes; you might want to dig into the community around ZBrush if you want to fill the gap from "start with a cube" to real sculpting.
More niche, but where I feel at home, is the model-with-coordinates-and-text-input approach. I don't know if it has a real name, but here is where it pops up and where I worked with it (a small sketch of the idea follows the list):
- In original AutoCAD (think 90s), a lot of the heavy lifting was done on an integrated command line, hacking in many coordinates.
- POV-Ray (and its predecessors and relatives) has a really nice DSL for doing CSG with algebraic hypersurfaces. Sounds way more scary than it is. Mark Shuttleworth used it on his space trip to make a render, for example.
- Professionally, I worked for a couple of years with a tool called FEMAP. I used a lot of coordinate entry and formulas to define stuff, and FEMAP is well suited for that.
- Finally, OpenSCAD is a contemporary example of the approach.
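To make the coordinates-and-text-input idea concrete, here is the small sketch promised above, using Blender's Python API: a pyramid defined purely from typed-in numbers (the shape and names are arbitrary).

```python
import bpy

# Four base corners plus an apex, and the faces connecting them.
verts = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0), (0, 0, 1.5)]
faces = [(0, 1, 2, 3),           # base quad
         (0, 1, 4), (1, 2, 4),   # triangular sides
         (2, 3, 4), (3, 0, 4)]

mesh = bpy.data.meshes.new("Pyramid")
mesh.from_pydata(verts, [], faces)
mesh.update()

obj = bpy.data.objects.new("Pyramid", mesh)
bpy.context.collection.objects.link(obj)
```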
Other popular approaches to designing shapes use live addition/subtraction. Nomad supports that as well, but it's not as intentional as, say, 3D Studio Max (for Windows), where animating those interactions is purposely included as their point of difference.
There's also solid geometry modelling, intended for items that would be manufactured; for that, SolidWorks is common.
Then finally you have software like Maya which lets you take all manner of approaches, including a few I haven't listed here (such as scripted or procedural modelling.) The disadvantage here is that the learning curve is a mountain.
And if you like a bit of history, POV-Ray is still around; you feed it text markup to produce a render.
- sculpting: you start with a high-density mesh and drag the points like clay.
- poly modeling: you start with a plane and extend the edges until you make the topology you want.
- box modelling: you start with a cube or other primitive shape and extend the faces until you make the shape you want.
- nurbs / patches: you create parts of a model by moving Bézier curves around and creating a surface between them.
- B-reps / parametric / CSG / CAD / SDF / BSP: these are all similar in that you start with mathematically defined shapes and then combine or subtract them from each other to get what you want. They're all different in implementation, though (a toy SDF sketch follows this list).
- photogrammetry / scans: you take imagery from multiple angles and correlate points between them to create a mesh.
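Here is the toy SDF sketch mentioned above: shapes are plain functions returning a signed distance, and the CSG booleans fall out as min/max. Everything here is invented for illustration.

```python
import math

def sphere(cx, cy, cz, r):
    # Signed distance to a sphere: negative inside, positive outside.
    return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

def union(a, b):        # closest surface wins
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def subtract(a, b):     # carve b out of a
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# Two overlapping spheres, one carved out of the other.
shape = subtract(sphere(0, 0, 0, 1.0), sphere(0.8, 0, 0, 0.6))

# A mesher or raymarcher would sample this field to extract the surface.
print(shape(0, 0, 0))    # inside the remaining solid -> negative
print(shape(0.8, 0, 0))  # inside the carved-out region -> positive
```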
Having a UI/UX for tablets is awesome, especially for sculpting (ZBrush launched their iPad version last year, but since Maxon bought it, it's all subscription only).
I joined the Blender development fund last year, they do pretty awesome stuff.
Almost all VR devices require lots of motion, have limited interaction affordances and have poor screens.
So you’re going to be more tired with a worse experience and will be working slower.
What XR is good for in the standard creative workflow is review at scale.
Blender does have an OpenXR plugin already but usability is hit or miss.
I really want to be able to write with a pencil in coding apps on the go, now that handwriting recognition has gotten good enough, but so far most of them provide a very lame experience.
It's like people don't really think outside the box about how to take advantage of new mediums.
Same applies to all those AI chat boxes: I don't want to type even more, I want to just talk with my computer, or have AI-based workflows integrated into tooling that feel like natural interactions.
I guess my hope now is that rather than selecting all the individual tools, I just want AI to help me. At one level that might be it trying to recognize gestures as shapes: you draw a near circle, it makes it a perfect circle. Lots of apps have tried this; unfortunately they are so finicky, and then you still need options to turn it off when you actually don't want a perfect circle. Add more features like that and you're back to an impenetrable app. That said, maybe if I could speak it? I draw a circle-like sketch and say "make it a circle"?
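The snapping half of that idea is mostly just curve fitting. A hedged sketch, assuming the stroke arrives as 2D points: a least-squares circle fit (the Kåsa method), with the stroke data invented for the example.

```python
import numpy as np

def fit_circle(points):
    """Fit x^2 + y^2 + a*x + b*y + c = 0 to an (N, 2) array of points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = (-a / 2, -b / 2)
    radius = np.sqrt(center[0]**2 + center[1]**2 - c)
    return center, radius

# A wobbly hand-drawn "circle": radius ~2 around (5, 3), plus noise.
t = np.linspace(0, 2 * np.pi, 50)
stroke = np.column_stack([5 + 2 * np.cos(t), 3 + 2 * np.sin(t)])
stroke += np.random.normal(scale=0.1, size=stroke.shape)

center, radius = fit_circle(stroke)
print(center, radius)  # close to (5, 3) and 2.0
```

An app could then snap only when the fit residual is small, which is exactly where the finickiness comes from: the threshold between "meant a circle" and "meant a wobble" is a guess.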
But it's not just that; trying to model almost anything just takes soooooooooo long. You've got to learn face selection, vertex selection, edge selection, extrusion options, chamfer options, bevel options, and on and on, and that's just the geometry for geometric things (furniture, vehicles, buildings, appliances). Then you have to set up UVs, etc.
And it gets worse for characters and organic things.
The process hasn't changed significantly since the mid-90s AFAICT. I learned 3ds in '95. I learned Maya in 2000. They're still basically the same apps 25-30 years later, and Blender fits right in by being just as giant and complex. Certain things have changed: sculpting like ZBrush, node geometry like Houdini, and lots of generators for buildings, furniture, plants, and trees. But the basics are still the same: still tedious, still needing thousands of options.
It's begging for disruption to something easier.
Are you looking to use Blender on a small touch screen backed by desktop Linux?
https://www.reddit.com/r/wacom/comments/16215v6/wacom_one_ge...
Looking into using the new gen 2 w/ touch on an rPi 5.
Makes me wonder if anyone's playing osu! on their Steam Decks...