It has been three years, and these tools can do a considerable portion of my day-to-day work. Salvage the wreckage? Unfortunately, I think many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase, or soon will be.
The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work. In fact, it more or less assumes the opposite. The critique is about the economic and organizational story being told around AI, not about whether an individual developer can ship faster today.
Saying “these tools now do a considerable portion of my work” operates on the micro level of personal productivity. Doctorow is operating on the macro level: how firms reframe human labor as “automation,” push humans into oversight and liability roles, and use exaggerated autonomy claims to justify valuations, layoffs, and cost-cutting.
Ironically, the “Wile E. Coyote running off a cliff” metaphor aligns more with the article than against it. The whole “reverse centaur” idea is that jobs don’t disappear instantly; they degrade first. People keep running because the system still sort of works, until the ground is gone and the responsibility snaps back onto humans.
So there’s no contradiction between “this saves me hours a day” and “this is being oversold in ways that will destabilize jobs and business models.” Those two things can be true at the same time. The comment seems to rebut “AI doesn’t work,” which isn’t really the claim being made.
The headline.
If we keep saying this hard enough over and over, maybe model capabilities will stop advancing.
Hey, there's even a causal story here! A million variations of this cope enter the pretraining data, the model decides the assistant character it's supposed to be playing really is dumb, human triumph follows. It's not _crazier_ than Roko's Basilisk.
Ironically, that is also how humans "think" 99.9% of the time.
> Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they are not hoping to make some centaurs.
> This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
> Now, AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past. That means that it will “hallucinate” a library called lib.pdf.text.parsing,
I think it is a convenient, palatable, and obviously comforting lie that lots of people right now are telling themselves.
To me, all the ‘nuance’ in this article is just the coyote in Doctorow beginning to look down while still not quite believing it. He is still leaning on the same “statistical autocomplete” tropes that have been a mainstay of the fingers-in-ears gang for the last three years.
The other argument Doctorow gives for the limits of LLMs is the example of typo-squatting. This isn't an attack that's new to LLMs, and while I don't know if anyone has done a study, I suspect that as of January 2026 a frontier model is no more susceptible to it than the median human, perhaps less; in general, Claude is certainly less likely to make a typo than I am. There are categories of mistakes it's still more likely to make than me, but this example already looks out of date, which isn't promising for the wider argument.
*to be fair, it's clearly not aimed at a technical audience.
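For what it's worth, the hallucinated-dependency risk the article describes is also mechanically checkable before anything gets installed. As a rough sketch (the allowlist, the 0.8 cutoff, and the `vet_dependency` helper are all made up for illustration), the Python stdlib's `difflib` can flag a proposed package name that is a near-miss of a vetted one:

```python
import difflib

# Hypothetical allowlist of packages your org has already vetted.
KNOWN_GOOD = {"requests", "numpy", "pandas", "cryptography"}

def vet_dependency(name: str, known=KNOWN_GOOD, cutoff: float = 0.8):
    """Classify a proposed dependency name against a vetted allowlist.

    Returns a (status, match) pair:
      - ("ok", name)        exact match with a vetted package
      - ("suspicious", m)   near-miss of vetted package m; possibly a
                            hallucinated or typo-squatted name
      - ("unknown", None)   nothing similar; needs human review
    """
    if name in known:
        return "ok", name
    close = difflib.get_close_matches(name, known, n=1, cutoff=cutoff)
    if close:
        return "suspicious", close[0]
    return "unknown", None
```

So `vet_dependency("reqeusts")` flags a likely typo of `requests`, while a wholly invented name like the article's `lib.pdf.text.parsing` simply comes back as unvetted. It doesn't make the underlying point wrong, but it does show the failure mode is the kind of thing ordinary tooling can catch.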
Did other technologies get described this way? “The accounting software is doing my work”? “The locomotive is doing my work”?
can y’all wake up?
Agreed.
> Unfortunately I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
Eh… some people, maybe. But history shows that nearly every time a tool makes people more efficient, we get more jobs, not fewer. Jevons paradox and all that: https://en.wikipedia.org/wiki/Jevons_paradox
Is this really something you want to have proudly said? Because it makes it sound like your "work" is not very important.
It is you who is the fool if you haven’t managed to use these things to massively accelerate what you can do, and if you cannot see the clear trend. Again, it has been three years since ChatGPT came out.
This is what every person who's been laid off because of AI says. Every single time. People like to assume that the work they do is important, but companies don't care about important; they care about pushing shit out the door, faster and cheaper. Your high-level math and business reasoning do not matter when they can just let someone cheaper go wild and deliver faster with no guardrails.
This is explicitly not what I am saying, given that I am leading with AI getting close to being able to do much of what is currently my job. I find it hard to imagine a world where we stagnate right where we are and it takes a decade to get any further; in other words, I cannot imagine a world where a considerable portion of jobs is not automatable soon. And I do not even think the result will be shittier.
>Google and Meta control the ad market. Google and Apple control the mobile market,
“Tech companies are monopolies,” says the article, which then proceeds to describe how those same companies compete with each other.
Avicebron•21m ago
Agreed. I think people would be open to suggestions if you have actionable ways to improve the current socio-economic system.
kuerbel•11m ago
The argument isn’t “tech is the problem,” but that autonomy narratives are used to shift risk, degrade labor, and justify valuations without real system-level productivity gains. That’s a critique of incentives and power structures, not of technological progress itself.
In that sense, “don’t blame tech, blame the system” is very close to the article’s point, not opposed to it.
netsharc•9m ago
Yeah, we're back to feudal lords having the power to control society; they can even easily buy governments. Seems like the problem is neoliberal capitalism: without any controls from society (i.e., democratically elected governments), it will maximize exploitation.