I agree that we're not all about to be replaced by AGI, that there is still a need for junior engineers, and with several of these points.
None of those arguments matter if the C-suite doesn't care and keeps doing crazy layoffs so they can buy more GPUs, and among that group, intelligence and rationality are far less common than following the cargo cult.
If it doesn’t work, it eventually dies.
I keep seeing and talking to people who are completely high, guzzling Kool-Aid straight from a pressure tap.
The level of confirmation bias is absolutely unhinged: "I told $AGENTICFOOLLM to do that and WOW, this PR is 90% AI!", _ignoring every previous failed attempt_ and the 10% of human work needed to land what is ultimately a 10-line diff, while handwaving any counterpoint away as "you're holding it wrong".
I'm essentially seeing two groups emerging: those all aboard the high-velocity hype train, and those who are curious about the technology but absolutely appalled by the former's irrational attitude, which is tantamount to completely winging it, diving head first off a cliff, deeply convinced that of course the tide will rise at just the right time for skulls not to be cracked open upon collision with very big, sharp rocks.
Try coding with Claude instead of Gemini. Those who do tell me it is well beyond Gemini.
Look at the recent US jobs reports--the drawdown was mostly in professional services. According to the chief economist of ADP: "Though layoffs continue to be rare, a hesitancy to hire and a reluctance to replace departing workers led to job losses last month."
Of course, correlation is not causation, but every white-collar person I talk with says AI is making them far more productive. It's not a leap to figure that management sees that as well.
Have you tried search in ChatGPT with o4-mini or o3?
I purchased a small electronic device from Japan recently. The language can be changed to English, but it’s a bit of a process.
Google’s AI just copied a Reddit comment that itself was probably AI generated. It made up instructions that are completely wrong.
I had to find an actual human written web page.
The problem is that with more and more AI slop, fewer humans will be motivated to write. AGI, at least the first generation, is going to be an extremely confident entity that refuses to be wrong.
Eventually someone is going to lose a billion dollars trusting it, and it’ll set back AI by 20 years. The biggest issue with AI is it must be right.
It’s impossible for anything to always be right since it’s impossible to know everything.
Only it damn well isn't. Anywhere. Not even in patient reports.
The problem with AI is: if it's right 90% of the time, but I have to do all the work anyway to make sure this isn't one of the 10% of times it's extremely confidently wrong, what use is it to me?
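To put rough numbers on that (a back-of-envelope sketch; every figure below is a made-up assumption, not a measurement):

    # When does an AI assistant actually save time?
    # All numbers are hypothetical, for illustration only.
    task_time_human = 60   # minutes to just do the task yourself
    review_time = 40       # minutes to verify the AI's output, paid every time
    accuracy = 0.90        # fraction of tasks the AI gets right
    rework_time = 60       # minutes to redo a task the AI botched

    # You always pay the review cost; 10% of the time you also redo the work.
    expected_ai_time = review_time + (1 - accuracy) * rework_time
    print(expected_ai_time)  # 46 minutes, vs. 60 doing it yourself

The savings evaporate as review_time approaches task_time_human: with these assumed numbers the AI wins by a sliver, and if verifying costs as much as doing the work, it loses outright.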
And now this error: "The project is damaged and cannot be opened due to a parse error. Examine the project file for invalid edits or unresolved source control conflicts."
"Shared" as in shareholder?
"Shared" as in piracy?
"Shared" as in monthly subscription?
"Shared" as in sharing the wealth when you lose your job to AI?
I’m looking at this as the landscape of the tools changing, so personally, I just keep looking for ways to use those tools to solve problems / make people’s lives easier (by offering new products, etc.). It’s an enabler rather than a threat, once the perspective is broadened, I feel.
It went round in circles doubting itself. When I challenged it, it would redo or undo too much of its work instead of focusing on what I was asking it about. It seemed desperate to please me, backing down whenever I challenged it.
I.e., depending on it turned me into a junior coder: overly complex code, jumping to code without enough thought, etc.
Yes yes I'm holding it wrong
The code they create seems to make a mess that is then also "solved" by AI. Huge sprawling messes. Funny, that. God help us if we need to clean up these messes when AI dies down.
billy99k•6h ago
The problem is that only a fraction of software developers have the ability/skills to work on the hard problems. A much larger percentage will only be able to work on things like CRUD apps and grunt work.
When these jobs are eliminated, many developers will be out of work.
anilgulecha•5h ago
So if I have to make a few 5-year predictions:
1. The key human engineering skill will be taking liability for the output produced by agents. You will be responsible for the signoff, and any good/bad that comes from it.
2. Some engineering roles/areas will become a "licensed" play - the way Canada is for other engineering disciplines.
3. Compensation at the entry level will be lower, and the expected time to ramp up to a productive level will be longer.
4. Careers will meaningfully start only at the senior level. At the junior level, your focus is to learn enough of the fundamentals, patterns, and design principles so that you reach the senior level and become a net positive on the team.
chii•4h ago
I suspect that juniors will not want to do this, because the end result of becoming a senior is not lucrative enough given the pace of LLM advancement.
zeta0134•2h ago
It's not really possible for an LLM to pick up on the hidden complexities of the app that real users and developers internalize through practice. Almost by definition, they're not documented! Users "just know" and thus there is no training data to ingest. I'd estimate though that the vast, vast majority of bugs I've been assigned originate from one of these vague reports by a client paying us enough to care.
xarope•1h ago
Or this, about MS pushing for more internal AI usage, and the resulting hilarity (or tragedy, depending on whether you are the one having to read the resulting code)? https://news.ycombinator.com/item?id=44404067
chii•4h ago
> A much larger percentage will only be able to work on things like CRUD apps and grunt work.

Which is lower-valued, and thus it is economically "correct" to have them replaced when an appropriate automation method is found.
> When these jobs are eliminated, many developers will be out of work.
Like in the past, those developers will need to either move up the value chain or move into a different career. This has happened before, and will continue to happen until the day humanity reaches some sort of singularity or post-scarcity.
bugglebeetle•3h ago
Textbook example of why this “economic” form of analysis is naive, stupid, and short-sighted (as is almost always the case).
AI models will never obtain the ability to completely replace “low value work” (as that is not perfectly definable, or able to be defined in advance, for all cases), so in a scenario where all engineers devoted to these tasks are let go, what you would end up with is engineers higher up the value chain being tasked with resolving the problems that result when the AI fails, underperforms, or the assessment of a task’s value was incorrect. The cumulative effect of this would be a massive drain on the effectiveness of said engineers, as they’re now tasked with context switching from creative, high-value work to troubleshooting opaque AI code slop.
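As a toy illustration of that drain (purely invented numbers; the variables and figures are assumptions, not data):

    # Toy model: senior time lost to firefighting failed AI output.
    # Every number here is an invented assumption for illustration.
    hours_per_week = 40
    firefights_per_week = 5     # escalations from failed "low value" automation
    hours_per_firefight = 2.0   # time spent untangling the AI slop itself
    switch_penalty = 0.5        # hours to refocus after each interruption

    lost = firefights_per_week * (hours_per_firefight + switch_penalty)
    print(f"{lost} of {hours_per_week} senior hours lost")  # 12.5 of 40

Under those assumptions, roughly 30% of senior capacity disappears before any savings from eliminating the "low value" roles are even counted.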
chii•3h ago
If this were truly the case, then companies that _didn't_ replace the "low value work" with AI and continued to use people would outperform and outcompete. My prediction is entirely predicated on the LLM's ability to do the replacement.
A second alternative would be that the cost of the "sloppy" AI code is externalized, which is not ideal, but if past history has any bearing, externalization of costs is rampant in corporate profit struggles.