frontpage.

Formal Security & Verification of Cryptographic Protocol Implementations in Rust

https://eprint.iacr.org/2025/980
1•matt_d•1m ago•0 comments

Drew Saur on the Commodore 64

https://theprogressivecio.com/the-commodore-64-made-a-difference/
1•Bogdanp•1m ago•0 comments

OpenAI Vulnerability: 48 Days, No Response

https://requilence.any.org/open-ai-vulnerability-responsible-disclosure
1•requilence•1m ago•1 comments

How bad are search results? Let's compare

https://danluu.com/seo-spam/
1•warrenm•2m ago•0 comments

Chatbot, the Evolution of Conversational Software

https://www.interlogica.it/en/insight-en/chatbot-history/
1•Bluestein•2m ago•0 comments

DeepMind AI staff tied to "aggressive" noncompete – Offering year-long PTO

https://www.windowscentral.com/software-apps/work-productivity/deepmind-noncompete-clause-rival-labs
1•Bluestein•3m ago•0 comments

Billionaires Convince Themselves Chatbots Close to Making Scientific Discoveries

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
3•maartenscholl•12m ago•1 comments

I Tried Grok's Built-In Anime Companion and It Called Me a Twat

https://www.wired.com/story/elon-musk-xai-ai-companion-ani/
2•coloneltcb•13m ago•0 comments

Mistral Releases Voxtral: Open Source Speech Understanding Models (3B and 24B)

https://huggingface.co/mistralai
2•yanng404•21m ago•0 comments

Behind the Streams: Three Years of Live at Netflix

https://netflixtechblog.com/behind-the-streams-live-at-netflix-part-1-d23f917c2f40?source=social.linkedin&_nonce=QaDyAeai
1•mfiguiere•26m ago•0 comments

Gauging Light Pollution: The Bortle Dark-Sky Scale

https://skyandtelescope.org/astronomy-resources/light-pollution-and-astronomy-the-bortle-dark-sky-scale/
1•dskhatri•29m ago•0 comments

Implantable device could save diabetes patients from dangerously low blood sugar

https://medicalxpress.com/news/2025-07-implantable-device-diabetes-patients-dangerously.html
1•PaulHoule•31m ago•0 comments

Americans' new tax rates depend on who they are and what they do

https://news.bloomberglaw.com/daily-tax-report/americans-new-tax-rates-depend-on-who-they-are-and-what-they-do
1•hhs•32m ago•0 comments

Ask HN: Relevant Java programming language in this new world of AI

1•rammy1234•32m ago•0 comments

I'm a Genocide Scholar. I Know It When I See It

https://www.nytimes.com/2025/07/15/opinion/israel-gaza-holocaust-genocide-palestinians.html
7•lyu07282•34m ago•4 comments

Nuxt v4

https://nuxt.com/blog/v4
1•2sf5•36m ago•0 comments

Ask HN: What is the best way to learn 3D modeling for 3D printing?

1•wand3r•37m ago•0 comments

Huawei's star AI model was built on burnout and plagiarism

https://the-open-source-ward.ghost.io/the-pangu-illusion-how-huaweis-star-ai-model-was-built-on-burnout-betrayal-and-open-source-theft/
18•avervaet•37m ago•8 comments

Steve Albini interview by Billy Hell (2005)

https://www.furious.com/perfect/shellac.html
1•rufus_foreman•37m ago•0 comments

Android Rewind

https://androidrewind.com/
1•GlitchRider47•43m ago•0 comments

Horus: A Protocol for Trustless Delegation Under Uncertainty

https://arxiv.org/abs/2507.00631
1•david_shi•47m ago•0 comments

Show HN: Vezeto – Android app for travelers powered by AI

1•dujma•48m ago•0 comments

US court gives Argentina three more days to surrender its YPF shares

https://english.elpais.com/economy-and-business/2025-07-15/us-court-gives-argentina-three-more-days-to-surrender-its-ypf-shares.html
2•geox•48m ago•0 comments

Yield curve for engineers has inverted

https://mvcalder-01701.medium.com/the-inverted-yield-curve-48d48959ba6a
3•mvcalder•53m ago•1 comments

Mathematician has solved the Kakeya conjecture

https://english.elpais.com/science-tech/2025-07-14/what-is-the-smallest-space-in-which-a-needle-can-be-rotated-to-point-in-the-opposite-direction-this-mathematician-has-finally-solved-the-kakeya-conjecture.html
2•belter•55m ago•0 comments

AI Isn't Responsible for Slop. We Are Doing It to Ourselves

https://www.techpolicy.press/ai-isnt-responsible-for-slop-we-are-doing-it-to-ourselves/
1•jomaric•57m ago•1 comments

Reversing Google's New VM-Based Integrity Protection: PairIP

https://blog.byterialab.com/reversing-googles-new-vm-based-integrity-protection-pairip/
1•zb3•1h ago•0 comments

Making an ASCII Animation

https://pierce.dev/notes/making-the-ghostty-animation/
1•icyfox•1h ago•0 comments

Show HN: TogetherMoon – Share the night sky with someone miles away

https://togethermoon.com/?go=2RG6Q
1•punyname•1h ago•0 comments

Do Indoor Pools Need to Close for Lightning?

https://undark.org/2025/07/15/indoor-pools-lightning/
1•EA-3167•1h ago•1 comments

Blender 4.5 LTS

https://www.blender.org/download/releases/4-5/
268•obdev•8h ago

Comments

obdev•8h ago
Blender 4.5 LTS Release Notes:

https://developer.blender.org/docs/release_notes/4.5/

marcodiego•7h ago
I don't know if it supports HDR on macOS but, AFAIK, it doesn't on Windows, and on Linux it is only supported with Wayland.

Though I don't like the Wayland vs. X11 flamewar, I'm happy to see some modern features are only supported on Wayland.

That may please the crowd that will be able to say "sorry, I can't use X11 because it doesn't support a feature I need," bringing some symmetry to the discussion.

Edit: correction: this is about the in-development 5.0 version: https://devtalk.blender.org/t/vulkan-wayland-hdr-support/412...

Buxato•7h ago
I really wish I could use Wayland, but there are too many problems or bugs related to the software I use for work and also for play. I will test it again with this new version of Blender (which was one of the programs with problems).
Joel_Mckay•7h ago
In general, most repositories use old stable versions of Blender, and often folks are reduced to using snap to maintain version-specific compatibility with add-ons etc.

Also, getting the most out of Intel+RTX CUDA render machines sometimes means booting into Windows for proper driver support. Sad but true... =3

Joel_Mckay•7h ago
The reality is most commercial software and users are on Windows machines. It is fundamentally a Blender interoperability and 3rd-party platform license-compatibility issue. We all wish it wasn't so, as many artists find the Windows file systems and color-calibration concepts bewildering.

Making a feature platform specific to a negligible fraction of the users is inefficient, as many applications will never be ported to Linux platforms.

Blender should be looking at its add-on ecosystem data, and evaluate where users find value. It is a very versatile program, but probably should be focused on media and game workflow interfaces rather than gimmicks.

Best of luck =3

johnnyanmac•5h ago
I agree with you, but I think this limitation is for much simpler reasons, like "the contributor only knew how to make this feature on Linux, and only on Wayland". Cross-compatibility for stuff as basic as color grading can be a thorny issue.

If nothing else, it's better to have some implementation to reference for future platforms than none.

Joel_Mckay•5h ago
We've all seen too many plugins become version-specific or indeterminately broken:

https://www.youtube.com/watch?v=WFZUB1eJb34

Someone needs to write a Blender color calibration solution next vacation =3

MrScruff•4h ago
The significant majority of the film and animation industry uses Linux.
Joel_Mckay•3h ago
Linux RTX CUDA drivers are getting better, but it really depends on the use case. For a Flamenco render farm it makes sense for sure.

Creatives on Wacom tablets and Adobe products etc. will exclude the Linux desktop option. =3

MrScruff•3h ago
Not just for the farm: the large majority of the movie and TV VFX and animation you see is done by artists using Linux workstations.
Joel_Mckay•2h ago
Not the artists I meet; they love their Wacom tablets and pressure-responsive painting programs... i.e. most of the other software is Windows-only.

I like Linux (I use it every day), but many CAD, media, animation, and mocap application vendors simply don't publish ports outside Windows.

Most studios have proprietary workflows with custom software. =3

MrScruff•2h ago
Indeed but this is a discussion about Blender and you posted originally:

> Making a feature platform specific to a negligible fraction of the users is inefficient, as many applications will never be ported to Linux platforms.

All the large studios use Linux; that's why all the third-party software used in feature animation and VFX is supported on Linux. So I'm just saying 'negligible fraction of users' in the case of Blender (which as a project would like to increase adoption in professional feature animation and VFX) isn't really true.

Joel_Mckay•1h ago
I am sure studios account for a small portion of the 4.5 million unique downloads each release. Note that less than 20% of users ever touch film or animation projects, 73% are single users, and most related user applications are Adobe products.

Stats are available from the published 2024 feedback data:

https://survey.blender.org/feedback/2024/

Best of luck, =3

Recommended reading:

https://www.poetry.com/poem/101535/the-blind-men-and-the-ele...

doublerabbit•6h ago
> I'm happy to see some modern features are only supported on Wayland.

Why?

If anything, that's a reason why I wouldn't fully jump to Blender.

I have been working on my own hobby game engine for the past 15 years and have been excited to introduce Blender to the workflow. If this is the case, I don't like it. Wayland has never worked for me the way X has.

OsrsNeedsf2P•6h ago
What else are you going to use on Linux?
doublerabbit•6h ago
Nuke, Maya?
johnnyanmac•6h ago
If they spent 15 years on the engine as is, what's another few years rolling a proprietary modeling system?

On a serious note, I do wonder if this Wayland-only limitation is something a fork could work around.

tapoxi•5h ago
I don't think there's an X11 HDR standard; one would need to be created and implemented.
yjftsjthsd-h•6h ago
If the starting point is that Wayland is missing features that X has, the good outcome is not getting to a point where neither option has all the features; the good outcome is that either one has all the features.
_bent•5h ago
It definitely does on macOS and I think also on Windows. You have to set the color management for the viewport to Display P3. In older versions this precluded you from using AgX or Filmic, but I think you can actually use AgX with Display P3 now.
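
For reference, a minimal bpy sketch of those settings; the exact enum strings ('Display P3', 'AgX') are taken from this thread and may differ between Blender versions and OCIO configs:

    import bpy

    scene = bpy.context.scene

    # Map the viewport/output to the Display P3 gamut.
    scene.display_settings.display_device = 'Display P3'

    # Pick the view transform; 'AgX' ships with 4.x, 'Filmic' with older versions.
    scene.view_settings.view_transform = 'AgX'
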
JKCalhoun•7h ago
A wonderful app that, I find, benefits from LLMs to help you figure out how to use it. That or you more or less have to dive in for months and months. The people on YouTube that make Blender look so easy … Blender is really all they do. ;-)

At some point I see an LLM more or less integrated into the UI.

At some point I see whole apps written so complexly that an LLM is the required interface.

unshavedyak•6h ago
Huh, TIL https://github.com/ahujasid/blender-mcp - do you use this?
JKCalhoun•6h ago
Thanks for pointing that out.

I have not tried it. Instead I have been asking Claude (etc.) "How do I create a repeating triangular truss..." or what-have-you. And then I follow the steps they list.
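
For what it's worth, the kind of steps that come back can also be scripted instead of clicked through. A minimal, hypothetical bpy sketch of a repeating truss-like segment (the profile coordinates and Array-modifier settings are made up for the example):

    import bpy

    # One triangular frame segment: three vertices joined by three edges.
    verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.0, 1.0)]
    edges = [(0, 1), (1, 2), (2, 0)]

    mesh = bpy.data.meshes.new("TrussSegment")
    mesh.from_pydata(verts, edges, [])
    mesh.update()

    obj = bpy.data.objects.new("Truss", mesh)
    bpy.context.collection.objects.link(obj)

    # Repeat the segment along X with an Array modifier.
    mod = obj.modifiers.new(name="Repeat", type='ARRAY')
    mod.count = 10
    mod.use_relative_offset = True
    mod.relative_offset_displace = (1.0, 0.0, 0.0)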

pram•4h ago
I tried it and it's really not very good. It "works" but models can't seem to do anything complicated. Claude 4 Opus btw
weregiraffe•6h ago
Oh boy. Giving Blender instructions in English, on every level of detail, could be quite a workflow. Like, have it generate a range of chairs, pick the mesh you like, then start asking it to tweak parts of it until you are satisfied. Add finishing touches manually.
Joel_Mckay•6h ago
In general, model re-topology almost always requires a manual editing process post sculpting and texturing. There have been countless attempts to automate the process, but it usually requires working through an obscure book about rigging with secondary transforms etc.

Could check out https://extensions.blender.org/add-ons/mpfb/ for low-poly character mesh generation tools. =3

dragontamer•5h ago
Why is this being downvoted?

If you are pushing vertices around to make a model, you eventually have to think of topology issues if you ever want to make a good animation.

And then when you have a good topology... then you need to rework the UV maps, normals, and other such data so that it's all consistent.

LLMs are not the right tool for this. I'm sure some AI retopology tool can come around, but there is a HUGE difference between a model designed for stills, a model designed for animation and, finally, a model designed for close-up facial animations.

The topology differences alone are insane.

---------

Now maybe in the future an Ubermesh (like the Uber shader / principled shader) can be made to make these processes consistent. And then tooling can be made over the hypothetical Ubermesh. But today it's a lot of manual retopology that requires deep thinking about animation details.

Ex: an anime-style mouth doesn't usually move the jawline (think Sonic's mouth: it's basically a moving hole that jumps around his face). Meanwhile, western 3D animation (Baldur's Gate and the like) obviously has fully made lip sync including phhfff, eeee and ahhh sounds.

The 'kind' of animation you want leads to different mesh topologies. It's just the nature of this work.

And people get ANGRY when you, e.g., animate Sonic with traditional teeth and a jawline (see 'Ugly Sonic'). You can't just willy-nilly pick one style randomly. You need purposeful decisions.

--------

I guess if you are a pure sculptor / doing stills then you don't have to worry about this. But many 3D projects are animated, if not most of them. The retopology steps are well known.

A low-poly look will almost certainly require manual effort (these topology problems tend to disappear with more triangles).

Joel_Mckay•5h ago
Blender is often used to project the high-granularity sculpted detail onto a low-poly (rigged and clean-topo) normal map, i.e. you can get great-looking assets that won't hit your poly budget as hard. When it works it's great...

People have feelings, and sometimes may misinterpret others' intent. I am happy we've met if we both learned something. =3

dragontamer•3h ago
Oh yeah, I was agreeing with you. Just providing context for others if they didn't understand what was going on.
MisterTea•6h ago
> That or you more or less have to dive in for months and months.

Waaaaay back in the Quake 3 days I had a pirated copy of 3D Studio Max and decided to try and make a Quake (or maybe it was Half-Life) character model. I found an online tutorial that showed you, step by step, how to set up Max with a background image that you "traced" over. So I grabbed images of the front, back and side views of a human, loaded them into the various viewports, then drew a rectangle from which you extrude a head, arms and legs. Then you manipulate the block-headed human mesh to fit the human background image - extruding more detail from the mesh as you go along. In one day I had a VERY crude model of a person. I also found out I don't like 3D modelling. Though I'd say a person who really enjoys it would pick it up in a week or two with good tutorials.

LLMs just cut out the learning part, leaving you helpless to create anything beyond prompts. I am still very mixed on LLMs, but due to greed and profit-driven momentum, they will eat the world.

qiller•1h ago
One problem I find is that a lot of educational content has moved into YouTube and videos (monetization be damned). I have no time to watch 10 minutes of rambling and ads for a quick tip; LLMs are great at distilling the info. Otherwise, I agree, deep knowledge building only happens through doing stuff…
robertoandred•7h ago
For all their talk about performance, that webpage is incredibly slow/stuttery.
wahnfrieden•7h ago
Luckily Blender is not a webpage
daef•3h ago
imagine Blender as an Electron app... ^^
unshavedyak•7h ago
If you get value out of Blender they could use your support: https://fund.blender.org/

Even just a coffee a month will help immensely. Please consider supporting :)

agumonkey•6h ago
I could tip them even without using it; the value of such a successful creative project is a pleasure in itself.
tester457•5h ago
Always support open source when you can, even if you use proprietary software instead of Blender.

Supporting open source creates competition for your paid alternative, so they're forced to give you a better product or a better deal.

StableAlkyne•4h ago
Especially software like Blender.

It's easily one of the most well-made FOSS projects out there!

chickenzzzzu•25m ago
I maintain a Blender fork at my company and Blender quite literally is my paycheck, but I disagree vehemently that it is a well-made FOSS project lol.
lukan•8m ago
So .. can you give reasons?
modernerd•4h ago
You can also/alternatively subscribe to Blender Studio:

https://studio.blender.org/

You get access to training, assets, source and production logs like the recent Dog Walk update while also supporting Blender:

https://studio.blender.org/projects/dogwalk/production-logs/

lvl155•3h ago
This is the only project I support year in, year out. What they did is amazing. Simply amazing.
raincole•6h ago
It does feel like Blender is eating the world of 3D content creation.*

I know the industry standard for animation is still Maya, and geometry nodes aren't as powerful as Houdini, but Blender has made such great progress so far... I wonder if there are any hobbyists/students who learn 3ds Max or 3DCoat as their first DCC anymore?

*: Assuming genAI won't deliver and make that world disappear entirely, ofc.

minimaxir•6h ago
Due to Blender being OSS and scriptable with Python (https://docs.blender.org/api/current/info_quickstart.html), it wouldn't surprise me if a future iteration of genAI/agents works with Blender directly.
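
As a rough illustration of how directly that API exposes the scene, here is a minimal sketch that could be run from Blender's bundled Python; the object and material names are made up for the example:

    import bpy

    # Start from an empty scene.
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete()

    # Add a sphere and keep a handle to it.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0.0, 0.0, 1.0))
    sphere = bpy.context.active_object
    sphere.name = "AgentSphere"

    # Give it a simple red material.
    mat = bpy.data.materials.new(name="AgentMaterial")
    mat.diffuse_color = (0.8, 0.2, 0.2, 1.0)
    sphere.data.materials.append(mat)

Scripts like this can also run headless with "blender --background --python script.py", which is presumably the hook an agent or MCP server would use.
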
homarp•6h ago
There is a Blender MCP server https://blender-mcp.com/
cultofmetatron•6h ago
Blender is the Python of 3D animation. It's the second-best tool for everything. Which, btw, is not a dig: it's really hard to be that level of reliable across an entire workflow.
Joel_Mckay•6h ago
Maya hasn't changed much in a decade, and that is a wise choice for training/documentation. They focused on the content part, which is its value proposition.

Blender has always had the perpetual-beta problem, as many boring core design issues were never really given priority. Fine for developers, but a liability in commercial settings.

Houdini is interesting, but with Blender geometry nodes now working it is unclear how another proprietary ecosystem will improve workflows. =3

The Entagma channel covers a lot of Houdini and Blender bleeding-edge features with short lab tutorials:

https://www.youtube.com/@Entagma/videos

raincole•6h ago
Geometry nodes are not Houdini (yet), though. The main reason people use Houdini is that it's easy to transfer data between modeling (SOP) and physical simulation (DOP) contexts. And they're working on integrating rigging/animation too.

Geometry nodes are quite a separate thing from what Blender already has. Last time I checked, one couldn't even create vertex groups in geometry nodes, and there was no way to create an armature there either. (Not sure if it has changed since I checked, though.)

Joel_Mckay•6h ago
Indeed, geometry nodes are new, and I have also found the documentation on many operations sparse. That being said, they have proven themselves very capable.

The mini labs on Entagma's channel were very helpful for parametric surface operations =3

https://www.youtube.com/@Entagma/videos

2944015603•5h ago
I think you should have been able to access vertex groups from the initial version of geometry nodes. Certainly, at this point, you can dynamically access (and set) arbitrary vertex groups, including loading the names of the vertex groups you want to access from a CSV file, or otherwise synthesizing the names using string operations.
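
The node-graph version is hard to show in text, but the plain-bpy analogue of "look up vertex groups by names loaded from a CSV" is roughly this sketch (the file path and group names are made up, and it assumes the active object is a mesh):

    import bpy
    import csv

    obj = bpy.context.active_object  # assumes a mesh with vertex groups

    # Hypothetical CSV with one vertex-group name per row, e.g. "Spikes", "Trim".
    with open("/tmp/vertex_groups.csv", newline="") as f:
        names = [row[0] for row in csv.reader(f) if row]

    for name in names:
        vg = obj.vertex_groups.get(name)
        if vg is None:
            print(f"missing vertex group: {name}")
            continue
        # Average weight over the vertices actually assigned to this group.
        weights = [g.weight for v in obj.data.vertices
                   for g in v.groups if g.group == vg.index]
        avg = sum(weights) / len(weights) if weights else 0.0
        print(name, round(avg, 3))
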
Joel_Mckay•5h ago
Indeed, but it is non-obvious how to handle it...

I found the Entagma lab videos very helpful in understanding how to parse geometry with nodes. It is not obvious (unless you already get CLI pipes), but so worth it when things finally work... =3

johnnyanmac•6h ago
That's often a problem with open source. Lots of people steering the ship, and it's hard for any de facto captain to really demand a direction. If they do, they may not have the contributors needed to really see it through.

This can be solved with paid contributors, but FOSS organizations don't have the most funding out there. So it can be challenging when trying to attract specialized talent.

Joel_Mckay•5h ago
We've donated to most projects we find useful, and have also tried a few paid Blender plugins.

The other problem is people cowing anyone who may not agree with their personal opinions on how projects should mature. QED: our karma scores... lol =3

These plugins made Blender usable for a few projects, and I have personally found value in supporting them for asset creation:

https://tinynocky.gumroad.com/l/tinyeye

https://sanctus.gumroad.com/l/SLibrary

https://polyhaven.com/plugins/blender

https://artell.gumroad.com/l/auto-rig-pro

https://fbra.gumroad.com/l/Faceit (for Unreal face mocap app with iPhone Pro Lidar)

MrScruff•4h ago
I've only dabbled with Blender, but from what I saw of geometry nodes it's quite a long way away from competing in the same space as Houdini. Houdini's single biggest feature, and the thing that allows film studios to use it at complexity and scale, is the HDA system, and there's no real competition for that.
Joel_Mckay•4h ago
Blender has solved several key challenges, but the learning curve is steep given that many tutorials are version-specific.

I usually recommend these courses to users when the under-$20 sale is active.

They cover a lot of Blender's non-intuitive workflows :3

"Complete Blender Creator: Learn 3D Modelling for Beginners"

https://www.udemy.com/course/blendertutorial/

* Basics of low-poly design in blender

"Blender Animation & Rigging: Bring Your Creations To Life"

https://www.udemy.com/course/blender-animation-rigging/

* Some more practice rigging

"The Ultimate Blender 3D Sculpting Course"

https://www.udemy.com/course/blender-3d-sculpting-course/

* Sculpting, Retopology, and VDM brushes

* basic anatomy

"The Ultimate Blender 3D Simulations, Physics & Particles"

https://www.udemy.com/course/blender-simulations-physics-par...

* Shader/Texture basics

* Geometry node basics

* Boid sprites

* Hair and physics simulation

* Camera FX, and post-render filters

* Instructions on how to export your assets to Unity 3D and Unreal game engines

const_cast•3h ago
I think the main difference between OSS like Blender and the competition is it seems like OSS only gets better. Each version is a little bit better, so projects that were once not competitive catch up. Blender is an obvious example, but also look at Krita, or the entire KDE project. These pieces of software age like fine wine: they get more features, they get faster, and they get more stable.

Closed-source software seems to get... stuck. In the best case. Often, it regresses: becoming buggier from version to version with fewer features. I think of Windows and the entire Microsoft suite of applications.

I think one exception is Gnome. Gnome loves removing features more than they love not implementing popular Wayland protocols.

Joel_Mckay•3h ago
With OSS, unless every add-on is part of the build tree or package-repo testing, it quickly becomes broken as the ecosystem constantly evolves.

i.e. of the $3k of community Blender plugins/add-ons we evaluated last year, only around 60% are still functional in stable Blender releases. Additionally, many built-in core features like Fracture became broken in 4.x due to API permutations and getting split into its own module.

In a production environment one must version-lock Blender for the project. =3
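
One cheap way to enforce that lock from the pipeline side is to check bpy.app.version before anything else runs; a minimal sketch, with the supported-version tuple and error text as placeholders:

    import bpy

    # Pin the pipeline to one tested Blender series.
    SUPPORTED = (4, 5)

    def check_version():
        major, minor, _patch = bpy.app.version
        if (major, minor) != SUPPORTED:
            raise RuntimeError(
                f"Pipeline is locked to Blender {SUPPORTED[0]}.{SUPPORTED[1]}, "
                f"but this is {major}.{minor}"
            )

    check_version()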

const_cast•3h ago
IMO it's always a good idea to vendor software. But yes, OSS typically moves fast. It's a tradeoff.
MrScruff•3h ago
Blender is definitely getting better compared to things like Maya, but I don't think this argument holds for Houdini & SideFX.
brcmthrowaway•1h ago
What is HDA?
Joel_Mckay•1h ago
It probably means a Houdini Digital Asset, which is similar to a Maya reference.

At one time there was a Houdini Engine Open Mesh Effect plugin, but I have no idea if that project survived.

Cheers =3

mrandish•6h ago
> Assuming genAI won't deliver and make that world disappear entirely, ofc.

I get the concern and the reasoning behind it but, having spent most of my adult life and career in digital content creation tooling (2D, 3D and video), I believe AI will be short-term disruptive but also a long-term net positive. It should make skilled pros faster and more productive while helping non-pros achieve more of their creative vision on their own.

For over 30 years progress in tools has been about giving creative people the power to realize their visions. To create more and do it faster, cheaper and at higher quality. Of course, they don't always choose to use that power well but the concern that "these new tools are just enabling bottom feeders to create more bad content" has remained a pretty constant refrain since I started in 1989. Even back in the 90s I said "Sure, these new tools will enable 95% more crap but they'll also unleash 5% that's great which wouldn't have existed before." I think that's just as true today as it's always been.

bee_rider•4h ago
I don’t see much risk to creatives and workers (hold the number of workers constant and have them all get more productive, and the world is a better place).

But I do wonder about smaller teams accomplishing more, in which the unit of work-doing people can be much smaller. The org tree can shrink, probably knocking out some levels. Maybe a team of 4 that does the work previously done by 10 finds it easier to just have someone directly interface with the customer, rather than needing a layer of project managers and customer service to receive customers' messages and distribute them.

I'd be worried if I were… anywhere above an IC, really.

mrandish•4h ago
> Maybe a team of 4 that does the work previously done by 10

Interestingly, I heard similar ratios were being discussed (or feared/lamented) in the mid-90s as computer-based non-linear video editing swept the post-production industry and in live TV production as computer-based all-in-one switcher/effects/titler systems like the Amiga-based Video Toaster disrupted everything. I still remember hearing about how the Toaster was giving unionized TV stations problems because union rules said the switcher guy wasn't allowed to touch the graphics/titling system or the editing system. So three guys would be standing around one chair depending on which tab was up on the Video Toaster screen :-).

Having been around the content creation tooling business for so long has given me perspective. Since at least the late 80s it's been one never-ending disruption. And, despite the constant predictions of jobs lost, today there are far more jobs in content creation than there were then. Of course, the job descriptions, types, skills required, divisions of labor and which roles were more or less valuable have never stopped changing and probably won't. Yes, this ends up displacing some people in some roles. Back in the 90s highly-paid senior video engineers with deep expertise in how to wire up and synchronize analog video systems initially laughed at us "young kids" with our "toy" digital video systems. Then they resented needing to call us in to get computer-based gear installed, interfaced and working. Then they resented that us "new kids" were paid less due to being less senior but becoming increasingly essential. Some of them learned the 'newfangled' digital systems while others didn't and opted to retire or do something else.

Over thirty years later, I'm now the highly-compensated "old guy" with once-invaluable expertise that's depreciating by the minute and proven skills at doing increasingly irrelevant things. The particulars change but the theme remains the same, which is why I don't think the shift to integrating AI in content creation will be significantly different. There will be skills that become less valuable and job types that go away and entire industry sectors which get disintermediated but at the same time new kinds of work, industry sectors and business opportunities will emerge in different places and ways. As always, the new job types won't be 1:1 replacements for the old, and this will cause as much angst as it always has. They'll not only look different and have different kinds of trade-offs, they may even have different business and compensation models.

It's Schumpeter's process of "Creative Destruction" and renewal (https://en.wikipedia.org/wiki/Creative_destruction). The destruction part is fast, loud and shocking as what exists today combusts in flame. The creative renewal part tends to be more gradual, unfamiliar and at first may not even seem related. From time to time, even the definition of "the industry" ends up changing along with the jobs and roles. I'm pretty sure the old-school TV station engineers I learned from wouldn't consider a digital nomad working full-time editing 30 second clips on a mobile device for YouTube, Instagram and TikTok influencers as even being "in the TV business."

mschuster91•4h ago
> It should make skilled pros faster and more productive while helping non-pros achieve more of their creative vision on their own.

The problem is, AI is good enough to replace juniors. That means companies are already cutting positions at that level and some are just itching to ditch intermediates as well once quality improves.

But when juniors and intermediates are all gone... how are the beancounters expecting to get new seniors when the old ones go into retirement or hand in their two-week notice because they are fed up with cleaning up after crap AI?

mrandish•3h ago
Disruption is disruptive and can suck. But it's also not new. Some companies and people will do short-sighted things in response and pay the price. Other job types and roles will simply be disintermediated and those individuals will need to learn new skills and find different kinds of jobs in the evolving landscape - same as always.

I just wrote a longer reply to a sibling comment addressing exactly this: https://news.ycombinator.com/item?id=44574895.

echelon•1h ago
> The problem is, AI is good enough to replace juniors. That means companies are already cutting positions at that level and some are just itching to ditch intermediates as well once quality improves.

Who needs companies? Those sound like an unnecessary middleman to me.

spookie•4h ago
> *: Assuming genAI won't deliver and make that world disappear entirely, ofc.

The current state of things leaves a lot to be desired. I have yet to see a model or technique that actually tackles real 3D content creation problems, aside from animation of course.

Most solutions are trying to solve things completely orthogonal to what you really need in the real world. They produce bad meshes, if meshes at all, have bad topology or inconsistent point sampling, and are extremely prone to parroting their training data.

It's kind of incredible how far we have come, but seriously, to understand a problem space you need to try to use the existing tools. Only then do new and useful ideas pop up that solve real issues. But man, nobody in the space seems to have a clue.

Believe me, 3D modelling software is REALLY good at what it does; it is really hard to try and dethrone human and machine working together in these really well-thought-out pieces of software.

egeres•3h ago
> Assuming genAI won't deliver and make that world disappear entirely, ofc.

Even if genAI were to kill the cinema/animation back-end industry, I still think 3D software would be needed to continue videogame development, and Godot is far behind in modelling/animation capabilities compared to Blender. Hopefully they can latch onto this in the long term if it happens that AI keeps eating artists' lunch.

_bent•5h ago
Custom mesh normals are huge, because this used to be a PITA.

Tucked away at the end is defining custom cameras through OSL, which is a very, very interesting feature, because it allows more physically accurate lens emulation, with dramatically more interesting bokeh than the standard perspective camera model: https://www.reddit.com/r/blender/comments/1kehtse/new_camera...

brcmthrowaway•1h ago
If only Blender had Unreal Engine-tier real-time capability!