The more examples of this going badly we can collect, the better.
This just seems like a poor decision made by C-suite folk who were neither AI-savvy enough to understand the limits of the tech, nor smart enough to run a meaningful trial to evaluate it. A triumph of wishful thinking over rational evaluation.
For that reason alone, humans will always need to be in the loop. Of course you can debate how many people you need for the above activity, but given that AI isn't omniscient nor omnipotent, I expect that number to be quite high for the foreseeable future.
One example: I've been vibe coding some stuff, and even though a pretty comprehensive set of tests is passing, I still end up reading all of the code. If I'm being honest, some of the decisions the AI makes are a bit opaque to me, so I end up spending a bunch of time asking it why (of course there's no real ego there, but bear with me...), re-reading the code, and thinking about whether it actually makes sense. I personally prefer this activity/mode since the tests pass (which were written by the AI too), and I know anything I manually change can be tested, but it's not something I could just submit to prod right away. This is just an MVP. I can't imagine delegating, if real money/customers were on the line, without even more scrutiny.
No no no, you don't get it: you would have ANOTHER AI for that.
Until AI gets ego and will of its own (probably the end of humanity) it will simply be a tool, regardless of how intelligent and capable it is.
One would hope that one ability of an 'omniscient and omnipotent' AI would be greater understanding.
When speaking of the divine (the only typical example of the omniscient and omnipotent that comes to mind) we never consider what happens when God (or whoever) misunderstands our intent -- we just rely on the fact that an All-Being type thing would just know.
I think the understanding of minute intent is one such trait an omniscient and omnipotent system must have.
p.s. what a bar raise -- we used to just be happy with AGI!
In reality, even an ASI won’t know your intent unless you communicate it clearly and unambiguously.
I recently came to this realization as well, and it now seems so obvious. I feel dumb for not realizing it sooner. Is there any good writing or podcast on this topic?
If anything, I've noticed the bar being lowered by the pro-AI set, except for humans, because the prevailing belief is that LLMs must already be AGI, and any limitations are dismissed as also being human limitations, and therefore as evidence that LLMs are already human-equivalent in any way that matters.
And instead of the singularity we have Roko's Basilisk.
This sort of assumes that most humans actually know what they want to do.
It is very untrue in my experience.
It's like most complaints I hear about AI art: yes, it is generic and bland, just like 90% of what human artists produce.
If your pay is 400 times average employee salary because of your unique strategic vision, surely firing 4000 people based on faulty assumptions should come with proportional consequences?
Or does the high risk, high reward, philosophy only apply to the reward part?
If we take out the AI part of this and treat it like any other project, if what they admit is true, it represents a massive failure of judgement and implementation.
I can't see anyone admitting that in public, as it would probably end their career, or at least it should. Especially if the company is a "meritocracy".
Though I’m a bit surprised they have that much support staff.
Salesforce has a vested interest in maintaining its seat-based licenses, so it's not in favor of mass layoffs.
Internally, Salesforce is pushing Agentforce, full stop.
1. Literally document everything in the product and keep the documentation up to date (could be partially automated?)
2. Build good enough search to find those things
3. Be able to troubleshoot / reason / abstract beyond those facts
4. Handle customer information that goes against the assumptions in the core set of facts (i.e., customers find bugs or don't understand fundamental concepts about computers)
5. Be prepared to restart the entire conversation when the customer gets frustrated with 1-4 (this is very annoying; see the sketch below)
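To make the shape of that concrete, here is a minimal Python sketch of steps 1-5. The knowledge base, function names, and canned responses are all hypothetical stand-ins, not anything a real support stack ships:

    # Step 1: the documented facts (hypothetical toy knowledge base).
    KNOWLEDGE_BASE = {
        "reset password": "Go to Settings > Security and click 'Reset password'.",
        "export data": "Use Reports > Export; CSV and JSON are supported.",
    }

    def search_docs(query: str):
        # Step 2: naive keyword search over the documented facts.
        for topic, answer in KNOWLEDGE_BASE.items():
            if all(word in query.lower() for word in topic.split()):
                return answer
        return None

    def handle_ticket(query: str, frustrated: bool = False) -> str:
        # Step 5: a frustrated customer gets a fresh start with a human.
        if frustrated:
            return "Restarting the conversation and routing to a human agent."
        answer = search_docs(query)
        if answer is None:
            # Steps 3-4: the question falls outside the documented facts
            # (a new bug, a misconception), so escalate rather than guess.
            return "No documented answer found; escalating to support staff."
        return answer

    print(handle_ticket("How do I reset my password?"))
    print(handle_ticket("The export button crashes the app"))   # escalates
    print(handle_ticket("anything at all", frustrated=True))    # restarts

The hard parts are of course steps 3-4: anything outside the documented facts has to be escalated rather than guessed at, which is exactly where an LLM is tempted to improvise.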
LLMs are a great technology for making up plausible-looking text. When correctness matters, and you don't have a second system that can reliably check it, the output turns out to be unreliable.
When you're dealing with customer support, everyone involved has already been failed by the regular system. So they're an exception, and they're unhappy. So you really don't want to inflict a second mistake on them.
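As a sketch of what that "second system" could look like, assuming a hypothetical generate_answer stand-in and a deterministic policy check (nothing here is a real API):

    import re

    def generate_answer(question: str) -> str:
        # Stand-in for an LLM call: plausible-looking text, possibly wrong.
        return "You are eligible for a full refund within 90 days."

    def verify_against_policy(answer: str, max_refund_days: int = 30) -> bool:
        # Deterministic check against a source of truth: no answer may
        # promise a longer refund window than the real policy allows.
        days = [int(d) for d in re.findall(r"(\d+) days", answer)]
        return all(d <= max_refund_days for d in days)

    def answer_or_escalate(question: str) -> str:
        draft = generate_answer(question)
        if verify_against_policy(draft):
            return draft
        # Plausible but unverified: hand off instead of inflicting it.
        return "Escalating to a human agent."

    print(answer_or_escalate("Can I get a refund?"))  # escalates: 90 > 30

The point is that the checker is boring and deterministic; the moment the only judge of the LLM's output is another LLM, you're back to plausible-looking text all the way down.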
It's my sincerely held opinion that we're fostering a culture here that ignores the "human impact" of the technology that we're rushing to adopt.
I'm well aware that many members of this community have achieved "success" through software. This includes the rapid adoption of new computing paradigms, new technology stacks, new frameworks, etc.
I am fortunate to be employed. But around me, when I step out of my house, it's painful. People are hurting. They're unemployed. They're depressed. And the younger generation is even worse. They can't even afford to dream.
I live in a corporate world of forced smiles and fake enthusiasm. I would hate for that same culture to take root here. We need to be able to express significant doubt, or even cynicism against AI, without fear of backlash.
stop. reading. evals.
Firing people = smart cost cutting
Hiring people = strong vote of confidence in continued growth
Edit: oh wait, this article isn't the source either. It references an article by "The Information", which I assume is https://www.theinformation.com/articles/salesforce-executive... There's also this follow-up: https://www.theinformation.com/articles/story-salesforces-de...
It's paywalled, so I can't verify.
Both the OP article and this Times of India article appear to be AI-generated summaries of the original article.
Craziness!
Is anyone really buying that they laid off 4k people _because_ they thought they'd replace them with an LLM agent? The article is suspect at best, and this doesn't align in the slightest with my experience of LLMs at work (it's created more work for me).
The layoff always smelled like it was because of the economy.
https://www.ktvu.com/news/salesforce-ai-layoffs-marc-benioff
At the time, it was such a big deal to a lot of us because it was a signal of what could eventually happen to the rest of us white-collar workers. Of course, it could still happen, as maybe AI systems just need another few years to mature before trying to fully replace jobs like this...
... although one thing I agree with you on is that there isn't much info online about these quotes from Salesforce executives, so they could be made up.
https://timesofindia.indiatimes.com/technology/tech-news/aft...
It isn't regret, they are trying to sell their Agentforce product.
Salesforce regrets firing 4000 experienced staff and replacing them with AI (December 25, 2025)
New Chennai Café Showcases Professional Excellence of Visually Impaired Chefs (December 22, 2025)
Employee Who Worked 80 Hour Weeks Files Lawsuit Alleging Termination After Approved Medical Leave (December 21, 2025)
UPS Sued for Running Holiday Business By Robbing Workers of Wages (December 18, 2025)
This Poor Man's Food is A Nutritional Powerhouse that is Often Ignored in Tamil Nadu (October 5, 2025)
Netizens Mourn as Trump Was Found Alive, Promising Tariffs Instead (August 31, 2025)
Looks like a clickbait farm of some sort? https://timesofindia.indiatimes.com/technology/tech-news/mic...
He also uses Cultural Revolution tactics, turning the young against the old. I imagine the AI house of cards will collapse soon, and he'll be remembered as the person who enshittified Windows after the board fires him.
A search found a similar article from the Times of India that credits The Information, but there's no good way for non-subscribers to verify it.
The root problem is they /estimated/.
> “We assumed the technology was further along than it actually was,” one executive said privately
... and /assumed/.
https://news.ycombinator.com/item?id=42639532
https://news.ycombinator.com/item?id=42639791
Unless people wise up to the fact that what's destroying jobs here isn't "Artificial Intelligence".
It is simply natural stupidity.
No, someone just wanted their bonus for being a forward-thinking, paradigm-shifting opex cutter. I'm sure they got it.
Also probably part of their go-to-market strategy. If they can prove it internally, they can sell it externally.