AI would need to 1. perform better than a person in a particular role, 2. do so for less than that person's total cost, and 3. do so with fewer mistakes and reduced liability.
Humans are objectively quite cheap. In fact, for the output of a single human, we're the cheapest we've ever been in history (particularly relative to the cost of the investment in AI and the kinds of roles AI would be 'replacing').
If there are any economic shifts, they will be increases in per-person efficiency, requiring a smaller workforce. I don't see that changing significantly in the next 5-10 years.
Anyway, I appreciate the response. I don't know how old you are, but I'm kind of old. And I've noticed that I've become much more cynical and pessimistic, not necessarily for any good reasons. So maybe it's just that.
I disagree with that statement when it comes to software developers. They are actually quite expensive. They typically enter the workforce with 16 years of education (assuming they have a college degree), and may also have a family and a mortgage. They have relatively high salaries, plus health insurance, and they can't work when they're sleeping, sick, or on vacation.
I once worked for a software consultancy where the owner said, "The worst thing about owning this kind of company is that all my capital walks out the door at six p.m."
AI won't do that. It'll work round the clock if you pay for it.
We do still need a human in the loop with AI. In part, that's to check and verify its work. In part, it's so the corporate overlords have someone to fire when things go wrong. From the looks of things right now, AI will never be "responsible" for its own work.
It does still need an experienced human to review its work, and I do regularly find issues with its output that only a mid-level or senior developer would notice. For example, I saw it write several Python methods this week that, when called simultaneously, would lead to deadlock in an external SQL database. I happen to know these methods WILL be called simultaneously, so I was able to fix the issue.
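To make that concrete, here's a minimal sketch of that class of bug (not the actual code; the table, column, and helper names are hypothetical, and the %s parameter style assumes a driver like psycopg2). Two functions each update the same two rows inside a transaction, but in opposite orders, so when they run at the same time each can hold the row lock the other is waiting for:

    # Hypothetical sketch of a lock-ordering deadlock via the Python DB-API.
    # get_connection() stands in for whatever returns a database connection.

    def transfer_a_to_b(amount):
        conn = get_connection()
        with conn:  # one transaction; commits on successful exit
            cur = conn.cursor()
            # Locks row 1, then row 2
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 1", (amount,))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 2", (amount,))

    def transfer_b_to_a(amount):
        conn = get_connection()
        with conn:
            cur = conn.cursor()
            # Opposite order: locks row 2, then row 1. Run concurrently with
            # transfer_a_to_b, each transaction can end up waiting on the other.
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 2", (amount,))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 1", (amount,))

The fix is the classic one: pick a single lock order (say, ascending id) and apply it everywhere, so concurrent transactions queue up instead of deadlocking. It's exactly the kind of thing a reviewer only catches if they know the call patterns.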
In existing large code bases that talk to many external systems and have poorly documented, esoteric business rules, I think Claude and other AIs will need supervision from an experienced developer for at least the next few years. Part of the reason for that is that many organizations simply don't capture all requirements in a way that AI can understand. Some business rules are locked up in long email threads or water cooler conversations that AI can't access.
But, yeah, Claude is already acting like a team of junior/mid-level developers for me. Because developers are highly paid, offloading their work to a machine can be hugely profitable for employers. Perhaps, over the next few years, developers will become like sys admins, for whom the machines do most of the meaningful work and the sys admin's job is to provision, troubleshoot and babysit them.
I'm getting near the end of my career, so I'm not too concerned about losing work in the years to come. What does concern me is the loss of knowledge that will come with the move to AI-driven coding. Maybe in ten years we will still need humans to babysit AI's most complicated programming work, but how many humans will there be ten years from now with the kind of deep, extensive experience that senior devs have today? How many developers will have manually provisioned and configured a server, set up and tuned a SQL database, debugged sneaky race conditions, worked out the kinks that arise between the dozens of systems that a single application must interact with?
We already see that posts to Stack Overflow have plummeted since programmers can simply ask ChatGPT or Claude how to solve a complex SQL problem or write a tricky regular expression. The AIs used to feed on Stack Overflow for answers. What will they feed on in the future? What human will have worked out the tricky problems that AI hasn't been asked to solve?
I read a few years ago that the US Navy convinced Congress to fund the construction of an aircraft carrier that the Navy didn't even need. The Navy's argument was that it took our country about eighty years to learn how to build world-class carriers. If we went an entire generation without building a new carrier, much or all of that knowledge would be lost.
The Navy was far-sighted in that decision. Tech companies are not nearly so forward thinking. AI will save them money on development in the short run, but in the long run, what will they do when new, hard-to-solve problems arise? A huge part of software engineering lies in defining the problem to be solved. What happens when we have no one left capable of defining the problems, or of hammering out solutions that have not been tried before?
- talking to people to understand how to leverage their platform and to get them to build what I need
- working in closed-source codebases. I know where the traps and the foot guns are. Claude doesn't
- telling people "no, that's a bad idea, don't do that." This is often more useful than a "you're absolutely right" followed by the perfect solution to the wrong problem
In short, I can think and I can learn. LLMs can’t.
You're right that it won't replace everyone, but businesses will need fewer people to maintain what they have.
This one is huge. I've personally witnessed many situations where a multi-million-dollar mistake was avoided by a domain expert shutting down a bad idea. Good leadership recognizes this value. Bad leadership just looks at how much code you ship.
That said, in the meantime, I'm not confident that I'd be able to find another job if I lost my current one, because I not only have to compete against every other candidate but also against the ethereal promise of what AI might bring in the near future.
johnwheeler•1h ago
I think the flaws are going to be solved, and if that happens, what do you think? I do believe there needs to be a human in the loop, but I don't think there need to be humans, plural. Eventually.
I believe this is denial. The claim that the best AI can't be reliable enough to do a modest refactoring is not correct. Yes, it can. What it currently cannot do is write a full app from start to finish, but they're working on longer task execution. And this is before any of the big data centers have even been built. What happens then? You get the naysayers who say, "Well, the scaling laws don't apply," but there are a lot of people who think they do.
johnwheeler•13m ago
I don't have to write code anymore, and the code that's coming out needs less and less of my intervention. Maybe I'm just much better at prompting than other people, but I doubt that (although I probably am better at prompting than most).
The two things I hear are:
1. You'll always need a human in the loop
2. AI isn't any good at writing code
The first one sounds more plausible, but it means fewer programmers over time.