Why should we make an exception in this case?
It has been three years and these tools can do a considerable portion of my day-to-day work. Salvage the wreckage? Unfortunately, I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
The piece isn’t claiming that AI tools are useless or that they don’t materially improve day-to-day work. In fact, it more or less assumes the opposite. The critique is about the economic and organizational story being told around AI, not about whether an individual developer can ship faster today.
Saying “these tools now do a considerable portion of my work” operates on the micro level of personal productivity. Doctorow is operating on the macro level: how firms reframe human labor as “automation,” push humans into oversight and liability roles, and use exaggerated autonomy claims to justify valuations, layoffs, and cost-cutting.
Ironically, the “Wile E. Coyote running off a cliff” metaphor aligns more with the article than against it. The whole “reverse centaur” idea is that jobs don’t disappear instantly; they degrade first. People keep running because the system still sort of works, until the ground is gone and the responsibility snaps back onto humans.
So there’s no contradiction between “this saves me hours a day” and “this is being oversold in ways that will destabilize jobs and business models.” Those two things can be true at the same time. The comment seems to rebut “AI doesn’t work,” which isn’t really the claim being made.
The headline.
I was accepting sodapopcan’s premise while responding to them. My joke was aimed at the posting guidelines and these little Hacker News traditions. But it was a bit dismissive toward you, which is a little rude. Sorry.
I don't have much to offer here (and yes, sorry, after I made my snarky remark I realized you had indeed read the article). I recognize AI's capabilities but mostly don't use it, primarily for political reasons but also because I just enjoy writing code. I'll sometimes use up the ChatGPT free limit using it as a somewhat better search engine (and it's not always better), but there's no way I'm paying for agents, which has everything to do with where the money is going, not the money itself. Of course there are other reasons, beyond how programmers use AI, but they would derail the general theme of these threads.
I'm just drawn to these threads for the drama, and sometimes something triggers me and I write a snarky throwaway comment. If the discussions, and particularly the companies themselves, could shift to the actual societal good AI can do and how it is concretely getting there, that would hold my attention. Instead we get Sona etc.
My point is that I don’t think a technology that went from ChatGPT (cool, useless) to opus-4.5+ in three years is obviously being oversold when the claim is that it can do your entire job rather than just being a useful tool.
I would have been much more interested in reading the article you’re suggesting.
Maybe model capabilities WILL continue to improve rapidly for years to come, in which case, yes, at some point it will be possible to replace most or all white-collar workers. In that case you are probably correct.
The other possibility is that capabilities will plateau at or not far above current levels because squeezing out further performance improvements simply becomes too expensive. In that case Cory Doctorow's argument seems sound. Currently all of these tools need human oversight to work well, and if a human is being paid to review everything generated by the AI, as Doctorow points out, they are effectively functioning as an accountability sink (we blame you when the AI screws up, have fun.)
I think it's worth bearing in mind that Geoffrey Hinton (infamously) predicted ten years ago that radiologists would all be out of a job in five years, when in fact demand for radiology has increased. He probably based this on some simple extrapolation from the rapid progress in image classification in the early 2010s. If image classification capabilities had continued to improve at that rate, he would probably have been correct.
Would you call something that could replace your labor "spicy autocomplete"? He also invokes NFTs and blockchain, for some reason. To me this phrasing makes it sound like he thinks they are damn near useless.
If we keep saying this hard enough over and over, maybe model capabilities will stop advancing.
Hey, there's even a causal story here! A million variations of this cope enter the pretraining data, the model decides the assistant character it's supposed to be playing really is dumb, human triumph follows. It's not _crazier_ than Roko's Basilisk.
Ironically, that is also how humans "think" 99.9% of the time.
> Think of AI software generation: there are plenty of coders who love using AI. Using AI for simple tasks can genuinely make them more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they are not hoping to make some centaurs.
> This is another key to understanding – and thus deflating – the AI bubble. The AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can’t do your job.
> Now, AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past. That means that it will “hallucinate” a library called lib.pdf.text.parsing,
I think it is a convenient, palatable, and obviously comforting lie that lots of people right now are telling themselves.
To me, all the ‘nuance’ in this article exists only because the coyote in Doctorow has begun looking down but still cannot quite believe it. He is still leaning on the same statistical-autocomplete tropes that have been a mainstay of the fingers-in-ears gang for the last three years.
I'm working directly with these tools and have several colleagues who do as well. Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it, but it's just not materializing that much in my day-to-day beyond the most basic code and scaffolding, which I then have to go back and fix because of subtle errors. It's actually hard to tell whether my productivity is better, because I have to spend time fixing the generated output.
Maybe it would help to recognize that your experience is not the norm. And if the tech were there, where are the actual profits from selling it? Increasingly, it is either "under development" as a consumer product or deployed only as a chatbot, in scenarios where being wrong is acceptable and users are warned to verify the output themselves.
If my other replies come off as aggro, I apologize - I definitely can struggle with moderating tone in comments to reflect how I actually feel.
> Our collective anecdotal experience keeps coming back to the conclusion that the tech just isn't where the marketing is on its capabilities. There's probably some value in the tech here, which leads others like yourself to be so completely sold on it
Let me be clear: I am not so completely sold on the current iteration. But I think there has been a significant improvement even since the midpoint of last year, the number of diffs I am returning mostly unedited is sharply increasing, and many people I talk to privately tell me they are no longer authoring any code themselves except for minor edits to diffs. Given that it has only been three years since ChatGPT, really I am just looking at the curve and saying ‘woah.’
It's unfortunately the case that even understanding what AI can and cannot do has become a matter of, as you say, "ideological world view". Ideally we'd be able to discuss what's factually true of AI at the beginning of 2026, and what's likely to be true within the next few years, independently of whether the trends are good for most humans or what we ought to do about them. In practice that's become pretty difficult, and the article to which we're all responding does not contribute positively.
The other argument Doctorow gives for the limits of LLMs is the example of typo-squatting. This isn't an attack that's new to LLMs and, while I don't know if anyone has done a study, I suspect it's already the case in January 2026 that a frontier model is no more susceptible to this than the median human, or perhaps less; certainly in general Claude is less likely to make a typo than I am. There are categories of mistakes it's still more likely to make than me, but the example here is already looking out of date, which isn't promising for the wider argument.
*to be fair, it's clearly not aimed at a technical audience.
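To make the typo-squatting point concrete, here is a minimal sketch of a pre-install sanity check using PyPI's public JSON API (https://pypi.org/pypi/<name>/json). "lib.pdf.text.parsing" is the hallucinated name from the article, not a real package, and the 90-day threshold is an arbitrary assumption of this sketch:

    import json
    import urllib.error
    import urllib.request
    from datetime import datetime, timezone

    def check_package(name: str) -> str:
        # Ask PyPI for the package's metadata.
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return "not on PyPI (unclaimed: exactly what a squatter would register)"
            raise
        # The oldest upload across all releases tells us when the name first appeared.
        uploads = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in data["releases"].values()
            for f in files
        ]
        if not uploads:
            return "exists but has no uploaded files: suspicious"
        age_days = (datetime.now(timezone.utc) - min(uploads)).days
        if age_days < 90:  # arbitrary freshness threshold, an assumption here
            return f"first published only {age_days} days ago: review before installing"
        return f"first published {age_days} days ago"

    for name in ("requests", "lib.pdf.text.parsing"):
        print(f"{name}: {check_package(name)}")

The subtle part is that the dangerous case isn't "package missing": it's "package exists but was registered last week by someone who noticed models hallucinating that name," which is why the sketch looks at age rather than mere existence.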
Did other technologies get talked about this way? "The accounting software is doing my work"? "The locomotive is doing my work"?
edit: tone
People are really, really, really good at not seeing what they don't want to see.
Agreed.
> Unfortunately I think that many people’s jobs are essentially in the “Coyote running off a cliff but not realizing it yet” phase or soon to be.
Eh… some people, maybe. But history shows that nearly every time a tool makes people more efficient, we get more jobs, not fewer. Jevons paradox and all that: https://en.wikipedia.org/wiki/Jevons_paradox
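As a toy illustration of how that arithmetic can work (a sketch; the productivity gain and the elasticity below are made-up values, not data):

    # Toy Jevons-paradox arithmetic; every number here is an assumption.
    productivity_gain = 2.0   # assume tooling doubles output per dev-hour
    elasticity = 1.5          # assume price elasticity of software demand

    price_ratio = 1.0 / productivity_gain             # software now costs 0.5x
    quantity_ratio = price_ratio ** -elasticity       # demand rises: 2**1.5 ~ 2.83x
    hours_ratio = quantity_ratio / productivity_gain  # dev-hours: ~ 1.41x

    print(f"relative dev-hours demanded: {hours_ratio:.2f}x")  # > 1 means more jobs

With elasticity below 1 the same arithmetic goes the other way, which is the whole disagreement in one line.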
Is this really something you want to have proudly said? Because it makes it sound like your "work" is not very important.
It is you who is the fool if you haven’t managed to use these things to massively accelerate what you can do, and if you cannot see the clear trend. Again, it has been three years since ChatGPT came out.
This is what every person who's been laid off because of AI says. Every single time. People really like to assume that the work they do is important, but companies don't care about important; they care about pushing shit out the door, faster and cheaper. Your high-level math and business reasoning do not matter when they can just let someone cheaper go wild and deliver faster with no guardrails.
This is explicitly not what I am saying, given that I am leading with AI getting close to being able to do much of what is currently my job. I find it hard to imagine a world where we stagnate right where we are and it takes a decade to get anything more. That is, I cannot imagine a world where a considerable portion of jobs are not automatable soon, and I do not even think the result will be shittier.
And yet you did not read this essay, or at least did not understand it.
Whatever LLM you used to summarize it has let you down. I wonder how often that is happening in your day to day work, perhaps that's why you feel your job is at risk.
Great. Good faith all around. Take care.
>Google and Meta control the ad market. Google and Apple control the mobile market,
“Tech companies are monopolies”, proceeds to describe how tech companies compete with each other.
Now market data is made available by brokers, and decisions can be collusively coordinated based on that data.
My prediction is that this will keep going all the way to the AGI stage. Someone will release (or leak) an AGI-capable model that can design AI chips, the fabs needed to build them, and the robots to build and operate those fabs, along with robot factories, raw-material mines, and refineries.
I believe AGI will require the ability to self-tune its own neural-network coefficients, which the current tech cannot do because it can’t deduce its own errors. Oh sorry, “hallucinations.” Developing brains learn from both pain and verbal feedback (no, not food!), etc.
It’s an interesting problem: just telling an LLM it’s wrong is not enough signal to adjust billions of parameters with.
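A toy sketch of that credit-assignment problem (assuming a single linear layer as a stand-in for an LLM and a REINFORCE-style update; purely illustrative, nothing like a real training loop):

    import numpy as np

    rng = np.random.default_rng(0)
    n_params = 10_000                   # a frontier model has billions
    theta = rng.normal(size=n_params)   # "model" parameters
    x = rng.normal(size=n_params)       # features of one prompt

    p = 1.0 / (1.0 + np.exp(-(x @ theta)))  # prob. of the sampled answer
    reward = -1.0                            # the only feedback: "that was wrong"

    # REINFORCE: the scalar reward scales the log-prob gradient, smearing
    # one number of feedback across every parameter at once.
    grad = reward * (1.0 - p) * x
    theta += 0.01 * grad

    # Each of the 10,000 parameters moved by a tiny, high-variance amount.
    print(f"mean |update| per parameter: {np.abs(0.01 * grad).mean():.2e}")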
Even worse, they've bet on the math not advancing. If it gets significantly more power-efficient, which literally could happen tomorrow if the right paper goes up on arXiv, maybe a 10-year-old laptop could give "good enough" results. All those data centers would then be trash, and your companies would be worth a negative trillion dollars.
I think all of these factors are completely independent of whether AI works or not, or how well it works. Personally, I don't care if it replaces programmers: get another job. I have simply experienced it, and at this point it is mediocre.
Of course I am not using the bleeding edge, and I am not privy to the top-secret insider stuff, which may well be orders of magnitude better. But if they've got it, why would they keep it a secret when people are desperate to give them money? If they're hiding it, it's something they know somebody could analyze and knock off, and then it's a race to the bottom again.
In a race to the bottom, we all win. Except the people and economies who bet their lives on it being a race to the top.
Agreed. I think people would be open to suggestions if you have actionable ways to improve the current socio-economic system.
The argument isn’t “tech is the problem,” but that autonomy narratives are used to shift risk, degrade labor, and justify valuations without real system-level productivity gains. That’s a critique of incentives and power structures, not of technological progress itself.
In that sense, “don’t blame tech, blame the system” is very close to the article’s point, not opposed to it.
Yeah, we're back to feudal lords having the power to control society; they can even easily buy governments. It seems the problem is neoliberal capitalism: without any controls coming from society (i.e., democratically elected governments), it will maximize exploitation.
If by "people" you mean "Cory Doctorow, the author of the article", then you really don't know anything about their work.
For example, he coined the term "enshittification" and talks often about the "enshittogenic policy environment" that gives rise to it.
Read the article.