frontpage.

Salesforce regrets firing 4000 experienced staff and replacing them with AI

https://maarthandam.com/2025/12/25/salesforce-regrets-firing-4000-staff-ai/
136•whynotmaybe•2h ago

Comments

chrisjj•1h ago
> the company overestimated AI’s readiness for real-world deployment

The root problem is they /estimated/.

> “We assumed the technology was further along than it actually was,” one executive said privately

... and /assumed/.

toomuchtodo•1h ago
And there will be no consequences for those who made these decisions.

https://news.ycombinator.com/item?id=42639532

https://news.ycombinator.com/item?id=42639791

chrisjj•16m ago
Perhaps.

Unless people wise up to the fact that what's destroying jobs here isn't Artificial Intelligence.

It is natural stupidity.

imglorp•1h ago
Testing? Field trials? Phased deployment?

No, someone just wanted their bonus for being forward-thinking, paradigm-shifting, opex cutters. I'm sure they got it.

mstank•52m ago
In this case I think it came from the very top down — Benioff has been very bullish on AI and they've pretty much rebranded around their Agentforce offering.

Also probably a part of their go-to-market strategy. If they can prove it internally they can sell it externally.

JoeAltmaier•1h ago
Somebody has to be the brave experimenter that tries the new thing. I'm just glad it was these folk. Since they make no tangible product and contribute nothing to society, they were perhaps the optimal choice to undergo these first catastrophic failed attempts at AI business.
pama•58m ago
Agree on broad strokes, but slack is a useful product.
JohnTHaller•57m ago
They didn't create Slack, they just bought it.
pama•54m ago
Sure. However, the hiccup that salesforce faces will affect slack usage.
brianwawok•56m ago
Salesforce the CRM, not Slack
belter•53m ago
Most disastrous, non-intuitive UI I've ever seen...
sznio•49m ago
Ever tried Teams?
belter•30m ago
Teams is confusing but Slack is gaslighting...
scsh•51m ago
While someone does have to be the first to experiment, I think you've implied a bit of a false dichotomy here. Experimentation can be good for sure, but it also doesn't have to involve such extremes. Sucks for the people left who now have to make up for the fact that someone's experiment didn't work out so well.
mdhb•35m ago
I think that, as an employee, it's good to have a clear failure case study from a large and credible organisation to point to when your boss gets the idea to fire everyone and just LLM everything: it isn't going to work the way they expect it to.

The more examples of this going badly we can get together the better.

oulipo2•28m ago
I think the OP was being sarcastic there...
dangoodmanUT•45m ago
Boom, roasted.
cornholio•45m ago
I think it was mostly a branding exercise: Salesforce wanted to signal to its customers that it is on top of this whole AI thing and there is no need to go to some unknown AI startup to "AIfy" their business. So they wanted to capitalize on FOMO / fear of being disrupted, while using a bad labor market to improve profitability. They succeeded in this and made news around the world, though perhaps not many new customers.
HarHarVeryFunny•33m ago
Makes no sense - why would Salesforce's customers care whether the company is using AI or not, other than when it impacts them (the customer), such as through worse customer service.

This just seems like a poor decision made by C-suite folk who were neither AI-savvy enough to understand the limits of the tech, nor smart enough to run a meaningful trial to evaluate it. A triumph of wishful thinking over rational evaluation.

wlesieutre•31m ago
I figured the messaging is targeted more at investors than customers
fumeux_fume•4m ago
If you consider the extent to which our economy has become financialized, then you see these decisions have little to do with providing a product for customers and much more to do with providing a stock for investors.
ilamont•18m ago
It was signaling to Wall Street and the rest of the tech industry. They want to be seen as profit focused and innovation driven.
DonHopkins•29m ago
I'd say "cowardly" not "brave".
Throaway198712•1h ago
Regrets that the cost-benefit analysis didn't work out, not that they fired anyone.
bhewes•1h ago
But have they hired anyone back?
nottorp•58m ago
Why would they, “AI” will be much better in 6 months!
foolswisdom•1h ago
Probably the first time I'm saying this, but this site appears heavily AI written.
nobodyandproud•57m ago
The senior leadership are accountable here. I assume none of them held themselves to task.
justin66•42m ago
“Mistakes were made.”
edgineer•57m ago
I'm aware that "what does Salesforce actually do?" is a joke but I also really don't know what they do and this article didn't help. They... have conversations with customers? What does the AI do?
JohnTHaller•55m ago
They make hideously complicated software to help businesses manage their business. You need consultants to help integrate it and to make any changes to it. The interfaces are convoluted and require learning how they work rather than having any kind of discoverability. Switching to their systems often involves a dip in customer satisfaction. Switching off of their systems is nearly impossible by design.
mr_mitm•46m ago
Sounds like SAP
rwmj•53m ago
We use it as basically a customer-facing bug tracker, except it's absolute garbage even compared to stuff like Jira.
sergiotapia•52m ago
A big chunk of it is like an enterprisey, old TwentyCRM. It connects with everything, and nobody got fired for choosing salesforce. And the decision makers all play golf together.
websiteapi•56m ago
weird - even if AI was literally omnipotent and omniscient, you would still be bottlenecked on humans' ability to actually evaluate and verify what it is doing and reconcile that with what you wanted it to do. Unless, of course, you're willing to YOLO the entire company on output you haven't actually checked yourself.

for that reason alone humans will always need to be in the loop. of course you can debate how many people you need for the above activity, but given that AI isn't omniscient, nor omnipotent, I expect that number to be quite high for the foreseeable future.

one example - I've been vibe coding some stuff, and even though a pretty comprehensive set of tests are passing, I still end up reading all of the code. if I'm being honest, some of the decisions the AI makes are a bit opaque to me, so I end up spending a bunch of time asking it why (of course there's no real ego there, but bear with me...), re-reading the code, and thinking about whether that actually makes sense. I personally prefer this activity/mode since the tests pass (which were written by the AI too), and I know anything I manually change can be tested, but it's not something I could just submit to prod right away. this is just an MVP. I can't imagine delegating if real money/customers were on the line without even more scrutiny.

w4yai•54m ago
> even if AI was literally omnipotent and omniscient, you would still be bottlenecked on human's ability to actually evaluate and verify what it is doing and reconciling that with what you wanted it to do

no no no you don't get it, you would have ANOTHER AI for that

gradus_ad•52m ago
It's not even about humans "needing" to be in the loop, but that humans "want" to be in the loop. AI is like a genius employee who has no ego and no desire to rise up the ranks, forever a peon while more willful colleagues surpass them in the hierarchy.

Until AI gets ego and will of its own (probably the end of humanity) it will simply be a tool, regardless of how intelligent and capable it is.

hnlmorg•46m ago
Humans need to be in the loop for the same reason humans peer-review other humans' pull requests: we all fuck up. And AI makes just as many mistakes as humans do; it just makes them significantly quicker.
undersuit•46m ago
Yes, "Mecha-hitler" has no aspirations. /s
only-one1701•31m ago
This is the opposite of both what the article is saying and reality
serf•47m ago
>weird - even if AI was literally omnipotent and omniscient, you would still be bottlenecked on human's ability to actually evaluate and verify what it is doing and reconciling that with what you wanted it to do.

one would hope that one ability of an 'omniscient and omnipotent' AI would be greater understanding.

When speaking of the divine (the only typical example of the omniscient and omnipotent that comes to mind) we never consider what happens when God (or whoever) misunderstands our intent -- we just rely on the fact that an All-Being type thing would just know.

I think the understanding of minute intent is one such trait an omniscient and omnipotent system must have.

p.s. what a bar raise -- we used to just be happy with AGI!

danenania•31m ago
That’s because gods are a mythical/supernatural invention. No technology can ever really be omniscient or omnipotent. It will always have limitations.

In reality, even an ASI won’t know your intent unless you communicate it clearly and unambiguously.

krapp•20m ago
Not really a bar raise - many people have assumed that "AGI" would mean essentially omnipotent/omniscient AI since the concept of the technological singularity came into being. Read Kurzweil or Rudy Rucker, there's a reason this sort of thing used to be called the "rapture for nerds."

If anything, I've noticed the bar being lowered by the pro-AI set (except for humans), because the prevailing belief is that LLMs must already be AGI, so any limitations are dismissed as also being human limitations, and therefore as evidence that LLMs are already human-equivalent in any way that matters.

And instead of the singularity we have Roko's Basilisk.

Mountain_Skies•37m ago
Move fast and break things. When a black box can be blamed, why care about quality? What we need is EXTREMELY strict liability on harms done by AIs and other black box processes. If a company adopts a black box, that should be considered reckless behavior until proven otherwise. Taking humans out of the loop is a conscious decision they make therefore they should be fully responsible for any mistakes or harms that result.
callc•9m ago
Shhhhh that’s a primary unspoken feature - lack of responsibility
65•31m ago
I've always found it much quicker to just... do the work myself. AI slows me down more than anything.
websiteapi•26m ago
fair. I used to think that too, but I find at least for golang, the sota models write tests way faster than I would be able to. tdd is actually really possible with ai imo. except of course you get the scaffolding implementation (I haven't figured out a way to get models to write tests in a way that ensures the tests actually do something useful without an implementation).
nick486•27m ago
>you would still be bottlenecked on human's ability to actually evaluate and verify what it is doing and reconciling that with what you wanted it to do.

this sort of assumes that most humans actually know what they want to do.

It is very untrue in my experience.

It's like most complaints I hear about AI art: yes, it is generic and bland, just like 90% of what human artists produce.

belter•56m ago
Executive compensation is justified by "...enormous impact leadership decisions have on company outcomes..." yet when those decisions blow up spectacularly, the accountability somehow evaporates.

If your pay is 400 times average employee salary because of your unique strategic vision, surely firing 4000 people based on faulty assumptions should come with proportional consequences?

Or does the high-risk, high-reward philosophy only apply to the reward part?

yoyohello13•46m ago
We all know the answer. There is no actual defense of inflated CEO salaries. It’s just the people in power maintaining their power and always has been.
sergiotapia•53m ago
What is the source for this? Seems like a random blog?
KaiserPro•38m ago
Yeah I can't see a source for the internal admissions of regret.

If we take out the AI part of this and treat it like any other project, if what they admit is true, it represents a massive failure of judgement and implementation.

I can't see anyone admitting that in public, as it would probably end their career, or should do at least. Especially if a company is a "meritocracy"

saos•52m ago
Salesforce is B2B and complex software. I wouldn't have expected them to lay off that much support. Surprising. They should be empowering their support staff with AI tools to improve customer experiences.

Though I'm a bit surprised they have that much support staff.

throwaway613745•48m ago
Customer experience is secondary to making the C-suite more money.
gortok•50m ago
What is this site? maarthandam.com? Is it a blog? An AI generated “newspaper”? An internet Newspaper? The menu doesn’t work on mobile, no articles appear to have a by-line, and there’s no link to outside sources to indicate the provenance of these quotes.
nextworddev•47m ago
one shotted vibe coded blog
narmiouh•49m ago
Is it just me, or does anyone else see that this article has no real references for its claims, and that the articles look like AI slop?
alexanderchr•39m ago
Yes, this reads like vacuous AI slop, and the **randomly bolded** text everywhere is a **dead giveaway**. At this point it's becoming a stronger signal than em-dashes.
herodotus•49m ago
It is impossible to verify anything in this article. For example "In recent internal discussions and public remarks". Where are these public remarks? How did this author get access to internal discussions? I rate this article as clickbait nonsense.
nextworddev•48m ago
This is a misread of Benioff's intent behind his comment lol.

Salesforce has a vested interest in maintaining its seat-based licenses, so it's not in favor of mass layoffs.

Internally, Salesforce is pushing Agentforce full stop.

why-o-why•48m ago
This all feels staged somehow. It feels like some kind of performative BS that I can't quite put my finger on.
softwaredoug•47m ago
For an AI agent to do a good job at customer support, you would need to (rough sketch after the list):

1. literally document everything in the product and keep documentation up to date (could be partially automated?)

2. Build good enough search to find those things

3. Be able to troubleshoot / reason / abstract beyond those facts

4. Handle customer information that goes against the assumptions in the core set of facts (ie customers find bugs or don’t understand fundamental concepts about computers)

5. Be prepared to restart the entire conversation when the customer gets frustrated with 1-4 (this is very annoying)
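
As a rough, hypothetical illustration of points 1-3 (and the hand-off when they fail), here's a minimal Python sketch of a retrieval-backed support-agent loop. The helpers (search_docs, call_llm) and the docs are made up for the example; this isn't Salesforce's or any vendor's actual pipeline:

    # Hypothetical sketch only -- stand-ins, not a real product's code.
    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call; an API client would go here.
        return "[model answer drafted from %d chars of context]" % len(prompt)

    def search_docs(query: str, docs: list[str], k: int = 3) -> list[str]:
        # Step 2: naive keyword search over the documentation from step 1.
        terms = query.lower().split()
        ranked = sorted(docs, key=lambda d: -sum(t in d.lower() for t in terms))
        return [d for d in ranked[:k] if any(t in d.lower() for t in terms)]

    def answer_ticket(question: str, docs: list[str]) -> str:
        context = search_docs(question, docs)
        if not context:
            # Step 4-ish: the docs don't cover it -- hand off rather than guess.
            return "escalate to a human"
        prompt = ("Answer using only these docs:\n" + "\n".join(context)
                  + "\n\nCustomer question: " + question)
        return call_llm(prompt)  # step 3 is where the model has to actually reason

    docs = ["Password resets require org admin rights.",
            "CSV exports are capped at 50,000 rows per request."]
    print(answer_ticket("Why does my export stop at 50k rows?", docs))

Steps 1 and 5 are the parts the sketch skips entirely, and arguably where most of the human effort goes.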

pjc50•47m ago
> declining service quality, higher complaint volumes, and internal firefighting

LLMs are a great technology for making up plausible looking text. When correctness matters, and you don't have a second system that can reliably check it, the output turns out to be unreliable.

When you're dealing with customer support, everyone involved has already been failed by the regular system. So they're an exception, and they're unhappy. So you really don't want to inflict a second mistake on them.

mbfg•46m ago
Maybe where AI needs to take over is at the CEO level.
binary132•46m ago
every single HN comment on these articles makes me doubt both the sentience of my fellow nerds and whether there are any actual human users of this website remaining.
kevin_thibedeau•44m ago
Competent management would have implemented a trial run to evaluate the feasibility of the plan. These sociopaths ensured their own failure by lunging for the prize without realizing they stepped off a cliff.
dangoodmanUT•43m ago
> “We assumed the technology was further along than it actually was,” one executive said privately, reflecting a growing recognition that AI performance in controlled demonstrations did not translate cleanly into real-world customer environments

stop. reading. evals.

Mountain_Skies•43m ago
And when they can't undo their mistake, will they accept the consequences, or will they cry to the government that there are no workers available to do the jobs, so national policy must be modified to give Salesforce an even larger firehose of candidates to ignore? Companies complain endlessly that there isn't a huge stable of unicorns for them to pick and choose from, but those 4000 experienced staff were known good workers and they dumped them anyway to chase fantasies. Salesforce will demand the government fix their mistake for them. The larger the company, the more they expect to never have to pay for their mistakes.
TheGRS•39m ago
I bounced out of this article pretty quick after seeing it was generated by AI.
xnx•37m ago
Public company logic:

Firing people = smart cost cutting

Hiring people = strong vote of confidence in continued growth

anshumankmr•21m ago
Ahem did you mean "rightsizing" and "rapid growth"?
matrix12•35m ago
Sauce https://timesofindia.indiatimes.com/technology/tech-news/aft...
delduca•32m ago
This site has zero reputation.
kevinwang•16m ago
Thanks. Link should be changed to this.

Edit: oh wait, this article isn't the source either. It references an article by "The Information", which I assume is https://www.theinformation.com/articles/salesforce-executive... There's also this follow-up: https://www.theinformation.com/articles/story-salesforces-de...

It's paywalled, so I can't verify.

Robdel12•34m ago
I’m surprised Hacker News isn't questioning this in the slightest?

Is anyone really buying they laid off 4k people _because_ they really thought they’d replace them with an LLM agent? The article is suspect at best and this doesn’t even in the slightest align with my experience with LLMs at work (it’s created more work for me).

The layoff always smelled like it was because of the economy.

davidgerard•32m ago
The article also reads like it was written with a chatbot.
computerdork•13m ago
Hmm, actually lines up for me at least. It was a pretty big news item a few months ago when Salesforce did this drastic reduction in their Customer Service department, and Marc Benioff raved about how great AI was (you might have just missed it):

  https://www.ktvu.com/news/salesforce-ai-layoffs-marc-benioff
At the time, it was such a big deal to a lot of us because it was a signal of what could eventually happen to the rest of us white-collar workers.

Of course, it could still happen, as maybe AI systems just need another few years to mature before trying to fully replace jobs like this...

... although, one thing I agree with you on is that there isn't much info online about these quotes from Salesforce executives, so they could be made up.

EagnaIonat•7m ago
I checked and this appears to be the source.

https://timesofindia.indiatimes.com/technology/tech-news/aft...

It isn't regret; they are trying to sell their Agentforce product.

arnonejoe•32m ago
What SWE would want to work there after reading this?
simonw•22m ago
maarthandam.com is a weird website. Recent posts:

    Salesforce regrets firing 4000 experienced staff and replacing them with AI
    December 25, 2025
    New Chennai Café Showcases Professional Excellence of Visually Impaired Chefs
    December 22, 2025
    Employee Who Worked 80 Hour Weeks Files Lawsuit Alleging Termination After Approved Medical Leave
    December 21, 2025
    UPS Sued for Running Holiday Business By Robbing Workers of Wages
    December 18, 2025
    This Poor Man’s Food is A Nutritional Powerhouse that is Often Ignored in Tamil Nadu
    October 5, 2025
    Netizens Mourn as Trump Was Found Alive, Promising Tariffs Instead
    August 31, 2025
Looks like a clickbait farm of some sort?
coliveira•20m ago
The most stupid narrative ever. If AI is so good for productivity, why not use it to make your 4000 workers produce even more than other companies? Why do you need to fire them, so that now you have your hands tied behind your back and go back to producing the same amount of software? It is completely obvious that the goal is to fire workers, not to get AI stuff done.
throw123ha71•15m ago
So Salesforce is ahead of Microsoft in wisdom. Nadella is focusing on his grand visions again and is telling dissenters to leave:

https://timesofindia.indiatimes.com/technology/tech-news/mic...

He also uses Cultural Revolution tactics, pitting the young against the old. I imagine the AI house of cards will collapse soon and he'll be remembered as the person who enshittified Windows after the board fires him.

stego-tech•13m ago
I’d love my old job back at this point. I genuinely miss working with such talented colleagues.

The vibe and the verifier: breaking through scientific barriers with AI

https://renormalize.substack.com/p/the-vibe-and-the-verifier-breaking
1•getnormality•46s ago•0 comments

Facebook Museum – Bringing the End Closer Together

https://networkcultures.org/blog/2025/12/24/facebook-museum/
1•glovink•1m ago•0 comments

Show HN: Gift for Kids – Live Santa AI Video Call

https://callsantatonight.com/christmas-gift-for-kids-santa-ai-video-call
1•s-stude•4m ago•0 comments

Donald Knuth's 2025 Christmas lecture: the Knight's Tours

https://thenewstack.io/donald-knuths-2025-christmas-lecture-the-knights-tours/
1•MilnerRoute•7m ago•0 comments

The Mammoth Pirates – In Russia's Arctic north, a new kind of gold rush

https://www.rferl.org/a/the-mammoth-pirates/27939865.html
1•ece20•8m ago•0 comments

DIY E-Reader Folds Open Like a Book

https://hackaday.com/2025/12/24/diy-e-reader-folds-open-like-a-book/
1•elashri•10m ago•0 comments

Free Speech in Tucson

https://yousaytoday.com/story/3ab31ff0-d955-408a-851d-ac77a9d7c23d
2•mvcalder•10m ago•0 comments

From shoreline to skyscraper: Seashells offer a path to low-carbon concrete

https://techxplore.com/news/2025-12-shoreline-skyscraper-seashells-path-carbon.html
1•PaulHoule•12m ago•0 comments

Available domain names for your next project

https://sneakydomains.com/freebies/pn1y89xgpnmvwdd
3•starf1sh•16m ago•0 comments

Why do we hear the same Christmas songs year after year?

https://text.npr.org/nx-s1-5637477
1•mooreds•16m ago•0 comments

Observability dashboard for an arbitrary LLM langgraph

https://github.com/xbt-a4224j/langgraph-observer
1•mooreds•17m ago•0 comments

AI #148: Christmas Break

https://thezvi.substack.com/p/ai-148-christmas-break
1•paulpauper•18m ago•0 comments

All over the rich world, fewer people are hooking up and shacking up

https://www.economist.com/briefing/2025/11/06/all-over-the-rich-world-fewer-people-are-hooking-up...
2•paulpauper•19m ago•0 comments

Show HN: I treated my brain like a buggy server and wrote a patch (Shi-Mo Model)

https://github.com/317317317apple-a11y/shi-mo-protocol/blob/main/README.md
1•ShiMo_Protocol•20m ago•1 comments

Meta Ads Minimum Daily Budget Calculator

https://fiz-fb-calculator.netlify.app/
1•hafizdhanani•20m ago•1 comments

How I Make These ASCII Pictures and Links to Other Tutorials (2000)

https://web.archive.org/web/20000520115049/http://www.ludd.luth.se/~vk/pics/ascii/junkyard/techst...
1•susam•26m ago•0 comments

Agentic Coding Course

https://agenticoding.ai/
1•ofriw•27m ago•0 comments

The Offline Society

https://www.theofflinesociety.org/
1•sigalor•27m ago•0 comments

Introduction to Agents

https://www.kaggle.com/whitepaper-introduction-to-agents
1•saikatsg•31m ago•0 comments

PEP 686 – Make UTF-8 mode default

https://peps.python.org/pep-0686/
1•tosh•31m ago•0 comments

New Way You Can Discover Asteroids

https://science.nasa.gov/get-involved/citizen-science/new-way-you-can-discover-asteroids/
1•ohjeez•32m ago•0 comments

Keeping Windows and macOS alive past their sell-by date: Part 1

https://www.theregister.com/2025/12/24/freshen_up_old_os/
1•cf100clunk•36m ago•1 comments

Show HN: Why many AI-generated websites don't show up on Google

https://pagesmith.ai/seo-for-ai-generated-sites
1•manu_trustdom•37m ago•1 comments

Sina Hartung Ousted as the CEO of CodeRabbit

https://twitter.com/SinaHartung/status/2004125292676383227
2•dsr12•38m ago•1 comments

Ask HN: What is the international distribution/statistics of HN visitors?

4•KellyCriterion•39m ago•0 comments

Leaving Meta as Engineering Manager after 6.5 years

https://twitter.com/k2xl/status/2004226660141310352
1•k2xl•41m ago•0 comments

EngineAI's T800 humanoid: agile motion, 29-DOF joints and combat-style demos

https://scienceclock.com/engineai-t800-humanoid-robot-martial-arts/
2•akg130522•43m ago•0 comments

Show HN: Bookmarklet shows local- and sessionStorage. e.g. on mobile browser

https://gist.github.com/ulrischa/c4c4b18065cafc17def687eb7a91a6ea
1•ulrischa•45m ago•0 comments

New image sensor breaks optical limits

https://phys.org/news/2025-12-image-sensor-optical-limits.html
1•manidoraisamy•45m ago•0 comments

Why FedRAMP Authorization and CMMC Level 2 Are Now Table Stakes for GovCon AI

https://blog.procurementsciences.com/psci_blogs/why-fedramp-authorization-and-cmmc-level-2-are-no...
3•mooreds•47m ago•0 comments