Enterprise is way too cozy with the big cloud providers, who bought into AI heavily and have been selling it on just as hard.
0: https://fortune.com/2025/08/18/mit-report-95-percent-generat...
The real question is: do those unicorns exist, or is it all worthless?
> The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.
The 95% isn't a knock on the AI tools, but on enterprises being bad at integration. Large enterprises being bad at integration is a story as old as time. IMO, reading beyond the headline, the report highlights the value of today's AI tools because they are leading enterprises to try to integrate faster than they normally would.
"AI tools found to be useful, but integration is hard like always" is a headline that would have gotten zero press.
This is likely why there is a lot of push from the top. They have already committed the money and now have to justify it.
As someone who has been in senior engineering management, it's helpful to understand the real reason, and this is definitely not it.
First, these AI subscriptions are usually month-to-month, and these days with the AI landscape changing so quickly, most companies would be reluctant to lock in a longer term even if there were a discount. So it's probably not hard to quickly cancel AI spend for SaaS products.
Second, the vast majority of companies understand sunk cost fallacy. If they truly believed AI wouldn't be a net benefit, they wouldn't force people to use it just because they already paid for it. Salaries for engineers are a hell of a lot more than their AI costs.
The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
I'm not at all saying you have to buy into this "FOMO rationale", but just saying "they already paid the money so that's why they want us to use it" feels like a bad excuse and just broadcasts a lack of understanding of how the vast majority of businesses work.
I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.
There was no need to post this.
> is probably because
I don't mean to be contrary, but these statements stand in opposition, so I'm not sure why you are so confidently weighing in on this.
Also, while I'm sure you've "been in senior engineering management", it doesn't seem like you've been in an organization that doesn't do engineering as its product offering. I think this article is addressing the 99% of companies that have some number of engineers but do not do engineering. That is to say: "My company does shoes. My senior leadership knows how to do shoes. I don't care about my engineering prowess, we do shoes. If someone says I can spend less on the thing that isn't my business (engineering), then yes, I want to do that."
>> is probably because
> I don't mean to be contrary, but these statements stand in opposition
No, they don't. It's perfectly consistent to say one reason is certainly wrong without saying another much more likely reason is definitely right.
Yes, this is the correct answer.
> ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage
But more importantly, this is completely inconsistent with how banks approach any other programming tool or how they approach lifelong learning. They are 100% comfortable with people not learning on the job in just about any other situation.
Both when the money has actually been committed and when it's usage-based.
I have found that companies are rarely rational and will not “leave money on the table”.
Alas, many members of the C suite do not exactly fit that description. They have just typed in a prompt or three, marveled that a computer can reply, and fantasize that it's basically a human replacement.
There are going to be a lot of (figurative, incorporated) dead bodies on the floor. But there will also be a few winners who actually understood what they were doing, and the wins will be massive. Same as it was post dot-com.
They have judgement. They can improve what was generated. They can fix a result when it falls short of the objective.
And they know when to give up on trying to get the AI to understand: when rephrasing won't improve next-word prediction, which happens when the situation is complex.
I am such a one, and AI isn't useful to me. The answers it gives me are routinely so bad that I can answer my own questions faster with a search engine or product documentation than I can get the AI to give me something. Often enough, I can't get the AI to give me anything useful at all. The current products are shockingly bad relative to the level of hype being thrown about.
For example, I have some product ideas in my head for things to 3D print, but I don't know enough about design to come up with the exact mechanisms and hinges. I've tried the chatbots, but none of them can really tell me anything useful. Yet once I already know the answer, they can list all kinds of details and know all about the specific mechanisms, while being completely unable to suggest them to me when I don't mention them by name in the prompt.
I said a couple years ago that the big companies would have trouble monetizing it, but they'd still be forced to spend for fear of becoming obsolete.
In non-startup, bureaucratic companies, these reports exist as cover-ups, or basically to cover everyone's ass, so no one is doing anything wrong because the report said so.
It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.
The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
Mentioning the mid-1990s internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is going to LLM efforts.
There aren't enough programmers to justify the valuations and capex
Agentic AI, which is a huge buzz in enterprise, feels more like workflow and RPA (again), with people misunderstanding that getting the happy flow working is only 20% of the job.
I thought we'd use it to reduce our graphics department but instead we've begun outsourcing designers to Colombia.
What I actually use it for is to save time and legal costs. For example a client in bankruptcy owes us $20k. Not worth hiring an attorney to walk us through bankruptcy filings. But can easily ask ChatGPT to summarize legal notices and advise us what to do next as a creditor.
sydbarrett74•3h ago
zippyman55•3h ago
bravetraveler•2h ago
Building clusters six servers at a time... that last on the order of weeks, appeasing "stakeholders" who are closer to steaks.
A whole lot of empty movement and empty minds behind these 'investments'. FTE that amounts to contracted, disposable labor to support The Hype.
lotsofpulp•2h ago
https://en.wikipedia.org/wiki/Constructive_dismissal
>In employment law, constructive dismissal occurs when an employee resigns due to the employer creating a hostile work environment.
No employee is resigning when an employer tells the employee they are terminated due to AI replacing them.
heavyset_go•2h ago
Layoffs and attrition happen for reasons that are not positive, AI provides a positive spin.
lmm•2h ago
No, but some are resigning when they're told their bonus is being cut because they didn't use enough AI.