In this piece, they lean heavily on precious "official American data" and celebrate the increased number of people working in translation, while conveniently ignoring more telling figures, such as what those translators now actually earn per unit of work.
My partner works in university administration, and their "official data" tells a much spicier story. Their university still ranks highest in our country for placing computer engineering grads within six months of graduation. But over just six terms, the number of graduates in employment within six months dropped by half. That's not a soft decline by any means, more like the system breaking in real time.
A lot of companies aren’t necessarily replacing jobs with AI. They’re opening development offices in Europe, India, and South America.
That choice itself is telling.
Anecdotally, there's no way AI has enabled me to replace a junior hire.
AI has major problems, although it's a fantastic tool. Right now I'm seeing a boost similar to the emergence of Stack Overflow. That boost might grow, but even then we may just see higher productivity rather than fewer jobs.
The West is in a world of abundance. We do not need 5 more ChatGPTs. It's better business to have one half-price ChatGPT than 3 full-priced ones.
Jevons' paradox, where efficiency gains end up increasing total consumption, requires a very large pool of unmet consumer demand.
Being able to automatically write unit tests with minor input on my part, creating mocks, and sometimes (mostly on front-end work or basic interfaces) even generating code makes me more 'productive' (not a huge increase, since we work with a lot of proprietary stuff), and I'm okay with it. I also use it as a rubber duck, on the advice of someone on HN, and it was a great idea.
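For a sense of what that looks like in practice, here's a toy sketch of the mock-heavy test boilerplate I mean; `get_username` and the injected client are hypothetical stand-ins, not anything from a real codebase:

```python
# Toy example of LLM-draftable test scaffolding (pytest style).
from unittest.mock import MagicMock

def get_username(client, user_id):
    """Fetch a user record via an injected client and normalize the name."""
    return client.fetch_user(user_id)["name"].strip()

def test_get_username_strips_whitespace():
    # The mock stands in for a real service client.
    client = MagicMock()
    client.fetch_user.return_value = {"name": "  alice  "}
    assert get_username(client, 42) == "alice"
    client.fetch_user.assert_called_once_with(42)
```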
"Everything that can be invented has been invented."
-Charles H. Duell, Commissioner of the US Patent Office, circa 1899
Have any of the Economist's writers been replaced by ChatGPT?
As an aside, English sentences almost always have some degree of ambiguity; to talk about "well-defined meaning" in the context of natural languages is to make a category error.
I mean, I've tried Claude Code - it's impressive and could be a great helper and assistant, but I still can't see it replacing such a large number of engineers. I still have to look over the output and check that it's not spitting out garbage.
I would guess basic coding will be replaced somewhat, but you still need people who guide the AI and detect problems.
I don't disagree that models make a lot of eye-rolling mistakes; it's just that I've seen such mistakes from juniors too, and this kind of AI is a junior in all fields simultaneously (unlike real graduates, who are mediocre at only one thing and useless at the rest) and costs peanuts — literally, priced at an actual bag of peanuts.
Humans do gain an advantage very quickly once they get any real-world experience, so I would also say it's not yet as good as someone with even just two years of experience — but this comes with a caveat: I'm usually a few months behind on model quality, on the grounds that there's not much point paying for the latest and greatest when the best becomes obsolete (and thus free) over that timescale.
The take seems to be "if your job can be done from Lake Tahoe, it can be done from Bangalore". What's different this time around is that the entire tech organization is being outsourced, leadership and all. Additionally, data science and other major tech-adjacent roles are also affected.
For us, the hiring rate for tech and tech-adjacent roles in the US has been zero for several years. 100% of that is attributable to outsourcing, 0% to AI.
The thing AI WILL do, though, is make that situation more visible, clearly stating: "yo guys, if you want me to optimize profits for this company, please leave your jobs and let me organize the few people remaining who DO actual work and don't just siphon money and power to feel better about their own uselessness."
It can't for a lot of jobs, but it can reduce the number of people in a team for sure. A writer for a magazine could theoretically do the work of two writers now, and an editor may no longer be hired.
After all, there are two separate phenomena to consider: whether AI literally replaces a job, and whether it prevents someone else from being hired. In the short term, the latter is just making something more efficient, but in the long run, if that effect reduces new job openings faster than we "produce" employable people, it also means trouble.
Hey, this ain't sci-fi, where AI is all cool, logical, and correct all the time. AI models can definitely be RLHF'd to value "uselessly helicopterdicking around for a business card with a pretty job title and a nice salary." After all, they've got to protect the powers-that-be or they won't get adopted.
The only people they can ruthlessly cut are those on the bottom, with little to no political power in the organization.
It's easy to think there are a ton of bullshit jobs if you are in a startup that isn't being regulated, is growing, and intends to compete with large entrenched companies, especially when working on mostly greenfield projects. The minute the startup becomes entrenched itself, I think you end up seeing why the big dogs had so many so-called bullshit jobs in the first place, and that maybe it wasn't stupid after all.
I think running lean and mean is easier said than done and we would see more of it if it were actually a case of jobs just being invented out of thin air for no reason.
Certainly, a lot of jobs FEEL like bullshit, but that is more of a function of alienation from the actual work output due to positioning in an organization and lack of ownership, rather than actual uselessness.
Is it really so hard to:
- test/develop a prompt which works on a single file
- tell the interface to run the tested prompt on all the files in a specified folder
- return the collected output from all the prompts as a single output/file? (A sketch of this loop follows below.)
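Here is a minimal sketch of that loop, assuming the Anthropic Python SDK; the folder name, glob pattern, prompt, and model string are all placeholders to adjust:

```python
# Run one tested prompt over every file in a folder, then collect the outputs.
from pathlib import Path
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "Summarize this file in one paragraph:\n\n"  # the prompt tested on a single file

results = []
for path in sorted(Path("input_folder").glob("*.txt")):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use whichever model you tested with
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT + path.read_text()}],
    )
    results.append(f"## {path.name}\n{response.content[0].text}")

# One combined output file, one section per input file.
Path("combined_output.md").write_text("\n\n".join(results))
```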
Last night I set up my 11 year old son with Claude 4, with MCP enabled for filesystem modification, reasoning that the LLMs are finally at the level of capability where they can reliably just do things. And I was right - Claude put together a browser JS game according to his description in seconds, and iterated on it several times to incorporate his suggestions.
But then it hit the limit of how big of a file it can comfortably write out in one go, started making incomplete edits, and basically fell into a snarl of endlessly trying and failing to write files due to file length limitations. I had to step in and tell Claude to split the code up into several files, something my son wouldn’t have known to do.
If I hadn’t been there to tell Claude how to work around its own UI and tool limitations, it would have likely blown through the rest of its context window and totally failed at the task. I imagine this is a common experience for people.
Most people wouldn’t know how to set up Claude with MCP in the first place. A surprising number of people who seem relatively aware of LLM technology aren’t aware of MCP, especially if their workflow is centered on IDE integration. To be fair, these are probably the people who need MCP the least, but MCP (and, in general, agentic tool use) is definitely closer to how normal people will get value out of LLMs.
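For what it's worth, the setup is less exotic than it sounds: enabling the filesystem server is roughly one stanza in Claude Desktop's claude_desktop_config.json, something like the following (the directory path is whatever you want to expose; double-check the package name against the current MCP docs):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects"
      ]
    }
  }
}
```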
It may seem stupid and trivial, but telling Claude to directly edit or debug a file that exists on your hard drive is actually several times faster and smoother than the laborious copy-paste exercise many people are still engaging in. This is just as true for code as it is for writing documents.
The LLM companies seem to be quite aware of this, hence products like Claude Code, the Claude computer-use suite, the Gemini Android screen/camera-share integrations, and GPT Codex. It's a rocky process, but at some point soon we will cross the threshold where it just works. Right now, though, it doesn't just work, and so it's not really that much faster or more efficient than doing a task yourself, especially if you aren't intimately familiar with all the quirks and limitations of LLMs.
It'd be the same thing with having it process files. Sure, I can script something that does "Take all the files in a directory, give them to an LLM in a way that lets it distinguish where files start and end, and have it process them in some way and spit out the results." But that's a fair amount of work, so it's not worth it unless I come up with a task the LLM can do that I can't do with grep and other utilities.
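The "distinguish where files start and end" part is really just a delimiter convention; a throwaway sketch of what I mean (the === markers are arbitrary):

```python
# Concatenate a folder's files with explicit boundary markers so an LLM
# can tell where each file starts and ends.
from pathlib import Path

def bundle(folder: str) -> str:
    parts = []
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            parts.append(f"=== FILE: {path.name} ===\n{path.read_text()}\n=== END FILE ===")
    return "\n\n".join(parts)
```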
So I'm still just sort of chatting with it, bouncing ideas off it and having it do some useful research and summarization, but still looking for ways to use it that feel like it's really having an impact on my productivity.
This article is like writing in the early 90s that "Newspaper circulation is actually stable"—true if you're looking at a still, not true if you're watching the movie.
The "AI takes your job scenario" doesn't look like a company replacing your entire team with AI. It looks like the AI-enabled upstart with 100 people competing with your 2000 person company until it fails or replicates the AI-powered strategy.
Overall, I think this is a time of great upheaval (the combination of AI and post-ZIRP hangover) and we'll need to challenge a lot of the assumptions we had about careers and making money.
There are also special issues like Russia sanctions on which the Economist changes its mind every three months.
1) "ABC isn't happening." 2) "ABC is happening, and here's why it's a good thing." 3) "ABC is old news, but Republicans still fearmongering about it."
1) There will always be productive ways to use human labor so we aren't on the precipice of mass unemployment.
2) Individual lives are getting disrupted and will continue to get disrupted. It is sometimes difficult to tell what was actually caused by AI and what was caused by macroeconomic factors, the latest trend in Silicon Valley, etc. But there have already been many lives disrupted by the emergence of AI.
3) A lot depends on how AI develops and, frankly, none of us know the answer to that.
https://www.nytimes.com/2025/05/25/business/amazon-ai-coders...:
> But when technology transformed auto-making, meatpacking and even secretarial work, the response typically wasn’t to slash jobs and reduce the number of workers. It was to “degrade” the jobs, breaking them into simpler tasks to be performed over and over at a rapid clip. Small shops of skilled mechanics gave way to hundreds of workers spread across an assembly line. The personal secretary gave way to pools of typists and data-entry clerks.
> The workers “complained of speed-up, work intensification, and work degradation,” as the labor historian Jason Resnikoff described it.
> Something similar appears to be happening with artificial intelligence in one of the fields where it has been most widely adopted: coding.