Thank you for being so honest, Mr. CEO. What a great guy.
/s
Just because your AI gives you a solution doesn't mean my AI will produce the same one. Now scale that fact up to different teams and different firms. How are things going to work? Why would it reduce the number of people? Just like Jurassic Park (or working on Linux), once strange, unpredictable things start happening, you need more and more people running around to clean things up. They don't know what the fuck they are doing, or how to do it well, because that's the nature of complex problems. So things spiral.
Most people just default to: oh, AI will have one answer to everything, and we will all agree to that solution. That will never happen, and therefore the predictions will all break.
> Click here and you could win $100
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
1. Companies' investment in "developing people to senior" has been shrinking for decades.
2. Self-education becoming key to "finishing" your education. College can't reasonably provide a complete education given the complexity and pace of software. New frameworks, CI, git, all sorts of things aren't in curricula. University starts with Von Neumann, bubble sort, and big-O and has to proceed forward from there. Luckily today's kids have infinitely patient LLMs! And insane amounts of content from YouTubers! And infinite distribution! It's easier than ever to put your work out there and have it be seen. Kids can apply to jobs with links to their portfolios and their open source, and show their chops that way, meaning companies need to lean less on interviews.
> "AI isn't stealing job categories outright — it's absorbing the lowest-skill tasks," Doshay said. "That shifts the burden to universities, boot camps, and candidates to level up faster."
Taking on interns and junior devs used to be part of the deal for tech companies that wanted the best talent. Now they can just look at kids' public portfolios and pluck the best ones.
It's a brave new world built on public personas, where everyone is their own CEO, and it's not for everyone. That's where the race comes in.
Once companies realize only so many AIs can be overseen by one person, they'll hire anyone and everyone who can babysit AIs to produce what the company needs - the more AIs you can babysit, the more valuable you are. Companies will become desperate for talent to put the compute to work. Jevons Paradox at full tilt.
Young guns WILL succeed in this environment. They'll learn on their own time and dime. It has never been easier, thanks to LLMs with infinite patience and YouTubers providing deep explanations.
But it's not entry-level software engineering. It's seat-of-your-pants learning and moving fast, running and gunning to get a thing built. Quality guardrails like PRs, code review, and tests are more important than ever; installing and instilling them is where you, as a senior dev, can shine.
Quite so. Unless AI can do literally everything, at which point all prognostication is worthless, you can get more done with more people. The entry-level jobs just might not be the same jobs that they are today. Which is actually not much skin off the nose of the entrant, as they are by definition not locked into a skillset anyway.
There is an absolutely ridiculous amount of work to be done, always. You can 10x, 100x everyone with a pulse and we'd still only uncover more work. Companies shed staff when the money runs out; the work will never run out.
Even if every CRUD webapp in the world collapses to one bored guy overseeing a fleet of 50000 AIs, as a global society we have fucking loads of work to do. We have PWh of energy capacity to design and install, a million km of high speed rail, hundreds of thousands of square kilometres of hospitals and schools, literally billions of homes to renovate from shacks to houses, forests to replant, moon bases, asteroid mines, generation ships, it goes on and on. If we want it to.
The only way work as a concept runs out is if we as a species decide we want it to (e.g. by giving all the money, aka human-time rental credits, to billionaires and refusing to pay for anything they don't personally want); if everyone dies, is a slave in the mines, ascends, or otherwise no longer requires work to sustain them; or if AGI actually happens, and happens at scale.
"AI will take the jobs" is a shareholder-fellating euphemism for "we want AI to do enough work to sustain the people who own the AIs without reference to the rest of humanity". Which they were already doing quite handily anyway. Whether they can keep doing it in safety in perpetuity remains to be seen.
The question is when. Robotics is in worse shape relative to humans than AI is, but the industry is now rapidly integrating modern AI into both the process and the actual products. It's hard to say, but there might be a "ChatGPT moment" for robotics soon.
If you replace all people at all levels with robots (and the robots and their tasks don't require people to design, maintain, or direct them), then the "in safety" aspect of the final paragraph will probably become the important part.
If you can, say, design and build and run a railway network entirely automatically, then we're well into singularity territory and there's literally no point guessing. The result could equally be fully automated luxury space communism or all humans fed into the algae digesters.
Do you now agree that that is false? Because I think I've shown it to be false. There is nothing unique about humans that precludes robots/AI/inorganics from doing every job a human can better and cheaper at some point in the future.
Also, it can't be infinitely cheaper, because money is fundamentally based on human time. If you can do everything without humans, then the concept of money is fatally wounded. What that would mean is anyone's guess.
I use Claude daily, and I am not saying this as a hot take or a burn. I have been genuinely thinking about how this works in one's brain.
One comparison might be to an oil exec who understands the science of what happens when you add CO2 to the atmosphere, yet continues working to produce more of it. But that harm sits on a somewhat distant horizon, and humans are terrible at reasoning about distant harms.
AI lab folks, by contrast, believe the damage from their products will arrive in just one to five years, and yet they still continue.
My other point here is that it's not just the C-suite, it's also the researchers.