From the abstract: "The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines." (emphasis mine)
The 11.7% figure is the modeled reduction in "wage value", which appears to be the marketplace value of (human) work.
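For intuition, a minimal sketch of how a task-level exposure share like this is usually computed; the paper's exact pipeline isn't shown in the thread, so every occupation, wage total, and exposure fraction below is a made-up assumption:

    # Illustrative sketch, not the paper's method: exposure share of total wage
    # value, assuming each occupation's wages split across tasks and each task
    # is simply flagged AI-exposed or not.
    occupations = {
        # occupation: (annual wages in $B, fraction of tasks flagged exposed)
        "admin_support": (900, 0.35),
        "financial_ops": (700, 0.25),
        "software_dev": (500, 0.40),
        "food_service": (600, 0.05),
    }

    exposed = sum(wages * frac for wages, frac in occupations.values())
    total = sum(wages for wages, _ in occupations.values())
    print(f"exposed wage value: ${exposed:.0f}B of ${total:.0f}B = {exposed / total:.1%}")

Nothing in a number built that way says the work goes away; it only says a model thinks the tasks are technically doable.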
Anything coming from Ayush and Ramesh should be highly scrutinized. Ramesh should stick to studying Camera Culture in the Media Lab.
I will believe a study from MIT when it comes out of CSAIL.
The real advantage AI gives is cover to change current processes. There are a million tiny tasks that could be automated and that, in aggregate, would reduce labor needs by making labor more productive (rough math on that below).
AI isn't a feature. Spellcheck is a feature. Templates are a feature. Search is a feature. A database of every paywalled article is a feature. AI can't do anything but it gives cover for features that do.
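Rough math on the "million tiny tasks" point; every per-task number below is an assumption picked only to show how small savings add up:

    # Illustrative only: many small automations, each trivial on its own, can
    # free a meaningful share of labor hours in aggregate. All inputs assumed.
    small_tasks_automated = 20            # recurring tasks automated per worker
    minutes_saved_per_task_per_week = 15  # spellcheck-sized savings for each
    work_minutes_per_week = 40 * 60

    saved = small_tasks_automated * minutes_saved_per_task_per_week
    print(f"{saved / work_minutes_per_week:.0%} of weekly hours freed under these assumptions")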
They know that 11.7% is WAY too precise to report. The truth is it's probably somewhere between 5% and 15% over the next 20 years, and nobody has any idea which end of that range is correct.
>The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines. Analysis shows that visible AI adoption concentrated in computing and technology (2.2% of wage value, approx $211 billion) represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approx $1.2 trillion). [https://arxiv.org/abs/2510.25137]
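Back-of-envelope check on those quoted figures (my arithmetic, not the paper's): both percentages imply a total wage base of roughly $10 trillion, so the numbers are at least internally consistent; the small gap is just the "approx" rounding.

    # Each dollar amount divided by its percentage should recover roughly the
    # same total wage base if the two quoted figures come from one model.
    computing = 211e9 / 0.022    # implied base from the 2.2% / $211B figure
    cognitive = 1.2e12 / 0.117   # implied base from the 11.7% / $1.2T figure
    print(f"${computing / 1e12:.1f}T implied by the computing figure")   # ~9.6T
    print(f"${cognitive / 1e12:.1f}T implied by the cognitive figure")   # ~10.3T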
Does the author not know what displacement outcomes are?
It's possible we got 2.2% better quality software by augmenting engineers.
I expect we'll see at least 11.7% <metric X> improvements in admin, financial, and professional services.
There is likely also a depressive effect on the labor market. There is nuance here, and it would be equally disingenuous to believe there will be zero displacement (although there is a case for more labor participation if administrative bottlenecks / costs are solved, tbd).
Either way, this is like a textbook example of a zero-sum-minded journalist grossly misrepresenting the world.
Products like v0.dev (and gemini-3 with nano banana in general) continue to get better at building website designs that don't look obviously vibe coded.
There are also models getting more capable (capturing a larger share of GDP) and GDP growing more quickly due to automation of economic activity. But even without that, it's at least a $2T/year opportunity (assuming the model is even a little accurate).
To me this validates the bull case being raised in private equity. The major risk is not whether the market or the valuations exist, but whether the value will be captured by a few major players or whether open models and local inference eat away at centralization.
Those routine functions could have been automated before LLMs.
Usually when they're not, it's due to some sort of corporate dysfunction, which is not something LLMs can solve.
- Even without AI, most corpos could probably shed 10% of their workforce - or maybe more - and still be about as productive as they are now. There are a bunch of reasons why that's true, but here are two I can easily think of: (1) after the layoffs, work shifts to the people who remain, who then work harder; (2) underperformers are often not let go for a long time, or ever, because their managers don't want to do the legwork (and layoffs are a good opportunity to force that to happen).
- It's hard for leadership to initiate layoffs, because doing so seems like it'll make the company look weak to investors, customers, etc. So if you really want to cut costs by shedding 10%+ of your workforce and making the remaining 90% work harder, then you have to have a good story to tell for why you are doing it.
- AI makes for a good story. It's a way to achieve what you would have wanted to achieve anyway, while making it seem like you're cutting edge.