
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
163•theblazehen•2d ago•47 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
674•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
950•xnx•20h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
123•matheusalmeida•2d ago•33 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
22•kaonwarb•3d ago•19 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
58•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
232•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
225•dmpetrov•15h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•16h ago•145 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
495•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
383•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•182 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
289•eljojo•17h ago•175 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•21h ago•279 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
32•jesperordrup•4h ago•16 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
20•bikenaga•3d ago•8 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
17•speckx•3d ago•7 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•7 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
91•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
258•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1070•cdrnsf•1d ago•446 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
36•gmays•9h ago•12 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•70 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•142 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
186•limoce•3d ago•100 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

Salesforce regrets firing 4000 experienced staff and replacing them with AI

https://maarthandam.com/2025/12/25/salesforce-regrets-firing-4000-staff-ai/
193•whynotmaybe•1mo ago

Comments

chrisjj•1mo ago
> the company overestimated AI’s readiness for real-world deployment

The root problem is they /estimated/.

> “We assumed the technology was further along than it actually was,” one executive said privately

... and /assumed/.

toomuchtodo•1mo ago
And there will be no consequences for those who made these decisions.

https://news.ycombinator.com/item?id=42639532

https://news.ycombinator.com/item?id=42639791

chrisjj•1mo ago
Perhaps.

Unless people wise up to the fact that what's destroying jobs here isn't "Artificial Intelligence".

It is simply natural stupidity.

imglorp•1mo ago
Testing? Field trials? Phased deployment?

No, someone just wanted their bonus for being forward-thinking, paradigm-shifting, opex cutters. I'm sure they got it.

mstank•1mo ago
In this case I think it came from the very top down — Benioff has been very bullish on AI and they’ve pretty much re-branded behind their Agentforce offerings.

Also probably a part of their go-to-market strategy. If they can prove it internally they can sell it externally.

JoeAltmaier•1mo ago
Somebody has to be the brave experimenter that tries the new thing. I'm just glad it was these folk. Since they make no tangible product and contribute nothing to society, they were perhaps the optimal choice to undergo these first catastrophic failed attempts at AI business.
pama•1mo ago
Agree on the broad strokes, but Slack is a useful product.
JohnTHaller•1mo ago
They didn't create Slack, they just bought it.
pama•1mo ago
Sure. However, the hiccup that Salesforce faces will affect Slack usage.
brianwawok•1mo ago
Salesforce the CRM, not Slack.
belter•1mo ago
The most disastrous, non-intuitive UI I've ever seen...
sznio•1mo ago
Ever tried Teams?
belter•1mo ago
Teams is confusing but Slack is gaslighting...
scsh•1mo ago
While someone does have to be the first to experiment, I think you've implied a bit of a false dichotomy here. Experimentation can be good for sure, but it also doesn't have to involve such extremes. Sucks for the people left who now have to make up for the fact that someone's experiment didn't work out so well.
mdhb•1mo ago
I think that as an employee it’s good to have a clear failure case study to point to, from a large and credible organisation, showing that this idea your boss has to fire everyone and just LLM everything isn't going to work the way you expect it to.

The more examples of this going badly we can get together the better.

oulipo2•1mo ago
I think the OP was being sarcastic there...
develop7•1mo ago
There's always someone that reads it and replies with a straight face.
dangoodmanUT•1mo ago
Boom, roasted.
cornholio•1mo ago
I think it was mostly a branding exercise, Salesforce wanted to signal to its customers that they are on top of this whole AI thing and there is no need to go to some unknown AI startup to "AIfy" their business. So they wanted to capitalize on FOMO / fear of being disrupted while using a bad labor market to improve profitability. They succeeded in this and made news around the world, but maybe not so many new customers.
HarHarVeryFunny•1mo ago
Makes no sense - why would Salesforce's customers care if the company is using AI or not, other than when it impacts them (the customer), such as through worse customer service.

This just seems like a poor decision made by C-suite folk who were neither AI-savvy enough to understand the limits of the tech, nor smart enough to run a meaningful trial to evaluate it. A triumph of wishful thinking over rational evaluation.

wlesieutre•1mo ago
I figured the messaging is targeted more at investors than customers
fumeux_fume•1mo ago
If you consider the extent to which our economy has become financialized, then you see these decisions have little to do with providing a product for customers and much more to do with providing a stock for investors.
philistine•1mo ago
The product is the press release.
6510•1mo ago
I need to talk to Jim, where is Jim?
ilamont•1mo ago
It was signaling to Wall Street and the rest of the tech industry. They want to be seen as profit focused and innovation driven.
bdangubic•1mo ago
they contribute very little except, of course, that without the jobs their products have created, 14.8647% of the US population would starve to death. HN seems like a perfect place, where people upvote stupid shit like the claim that some of the most successful companies in the history of mankind contribute nothing to society. bravo!! :)
JoeAltmaier•1mo ago
A bold statement. Who knew so many US citizens owed their food to an internet company! And not even Google or Amazon. Seems a reach, by maybe two or three decimal places.
Throaway198712•1mo ago
Regrets that the cost-benefit analysis didn't work out, not that they fired anyone.
bhewes•1mo ago
But have they hired anyone back?
nottorp•1mo ago
Why would they? “AI” will be much better in 6 months!
foolswisdom•1mo ago
Probably the first time I'm saying this, but this site appears heavily AI written.
nobodyandproud•1mo ago
The senior leadership are accountable here. I assume none of them held themselves to task.
justin66•1mo ago
“Mistakes were made.”
edgineer•1mo ago
I'm aware that "what does Salesforce actually do?" is a joke, but I also really don't know what they do, and this article didn't help. They... have conversations with customers? What does the AI do?
JohnTHaller•1mo ago
They make hideously complicated software to help businesses manage their business. You need consultants to help integrate it and to make any changes to it. The interfaces are convoluted and require learning how they work rather than having any kind of discoverability. Switching to their systems often involves a dip in customer satisfaction. Switching off of their systems is nearly impossible by design.
mr_mitm•1mo ago
Sounds like SAP
rwmj•1mo ago
We use it as basically a customer-facing bug tracker, except it's absolute garbage even compared to stuff like Jira.
sergiotapia•1mo ago
A big chunk of it is like an enterprisey, old TwentyCRM. It connects with everything, and nobody got fired for choosing Salesforce. And the decision makers all play golf together.
cons0le•1mo ago
In 2025, the most profitable companies are ones where nobody knows what they do. Salesforce, Palantir, Oracle, etc.
websiteapi•1mo ago
weird - even if AI was literally omnipotent and omniscient, you would still be bottlenecked on humans' ability to actually evaluate and verify what it is doing and reconcile that with what you wanted it to do. Unless, of course, you're willing to YOLO the entire company on output you haven't actually checked yourself.

for that reason alone humans will always need to be in the loop. of course you can debate how many people you need for the above activity, but given that AI isn't omniscient nor omnipotent, I expect that number to be quite high for the foreseeable future.

one example - I've been vibe coding some stuff, and even though a pretty comprehensive set of tests are passing, I still end up reading all of the code. if I'm being honest some of the decisions the AI makes are a bit opaque to me so I end up spending a bunch of time asking it why (of course there's no real ego there, but bear with me...), re-reading the code, thinking about whether that actually makes sense. I personally prefer this activity/mode since the tests pass (which were written by the AI too), and I know anything I manually change can be tested, but it's not something I could just submit to prod right away. this is just an MVP. I can't imagine delegating if real money/customers were on the line without even more scrutiny.

w4yai•1mo ago
> even if AI was literally omnipotent and omniscient, you would still be bottlenecked on humans' ability to actually evaluate and verify what it is doing and reconcile that with what you wanted it to do

no no no you don't get it, you would have ANOTHER AI for that

morkalork•1mo ago
You're being sarcastic but if I hear "LLM as judge" one more time, I might jump off a bridge.

Also, it does appear that there are companies willing to YOLO themselves off a cliff with AI

gradus_ad•1mo ago
It's not even about humans "needing" to be in the loop, but that humans "want" to be in the loop. AI is like a genius employee who has no ego and no desire to rise up the ranks, forever a peon while more willful colleagues surpass them in the hierarchy.

Until AI gets ego and will of its own (probably the end of humanity) it will simply be a tool, regardless of how intelligent and capable it is.

hnlmorg•1mo ago
Humans need to be in the loop for the same reason other humans peer review humans' pull requests: we all fuck up. And AI makes just as many mistakes as humans do. It just makes them significantly quicker.
undersuit•1mo ago
Yes, "Mecha-hitler" has no aspirations. /s
only-one1701•1mo ago
This is the opposite of both what the article is saying, and reality
serf•1mo ago
>weird - even if AI was literally omnipotent and omniscient, you would still be bottlenecked on humans' ability to actually evaluate and verify what it is doing and reconcile that with what you wanted it to do.

one would hope that one ability of an 'omniscient and omnipotent' AI would be greater understanding.

When speaking of the divine (the only typical example of the omniscient and omnipotent that comes to mind) we never consider what happens when God (or whoever) misunderstands our intent -- we just rely on the fact that an All-Being type thing would just know.

I think the understanding of minute intent is one such trait an omniscient and omnipotent system must have.

p.s. what a bar raise -- we used to just be happy with AGI!

danenania•1mo ago
That’s because gods are a mythical/supernatural invention. No technology can ever really be omniscient or omnipotent. It will always have limitations.

In reality, even an ASI won’t know your intent unless you communicate it clearly and unambiguously.

consumer451•1mo ago
> In reality, even an ASI won’t know your intent unless you communicate it clearly and unambiguously.

I recently came to this realization as well, and it now seems so obvious. I feel dumb for not realizing it sooner. Is there any good writing or podcast on this topic?

bdangubic•1mo ago
The communication I get from customers is seldom clear and never unambiguous, but I’ve managed since the ’90s.
array_key_first•1mo ago
Right, but you have to do a lot of work, and really most of your work is in this area. Less on the actual building stuff.

Figuring out what to build is 80% of the work; building it is maybe 20%. The 20% has never been the bottleneck. We make a lot of software, and most of it is not optimal and requires years if not decades of tweaking to meet the true requirements.

krapp•1mo ago
Not really a bar raise - many people have assumed that "AGI" would mean essentially omnipotent/omniscient AI since the concept of the technological singularity came into being. Read Kurzweil or Rudy Rucker, there's a reason this sort of thing used to be called the "rapture for nerds."

If anything, I've noticed the bar being lowered by the pro-AI set (except for humans), because the prevailing belief is that LLMs must already be AGI: any limitations are dismissed as also being human limitations, and therefore as evidence that LLMs are already human-equivalent in any way that matters.

And instead of the singularity we have Roko's Basilisk.

sweetjuly•1mo ago
Genies, maybe? They are omnipotent and (generally) sufficiently aware of your desires that they shouldn't actually get "confused". Genies are tricksters that will do their absolute best to fulfill the letter of your wish but not the meaning.
Mountain_Skies•1mo ago
Move fast and break things. When a black box can be blamed, why care about quality? What we need is EXTREMELY strict liability for harms done by AIs and other black-box processes. If a company adopts a black box, that should be considered reckless behavior until proven otherwise. Taking humans out of the loop is a conscious decision they make; therefore they should be fully responsible for any mistakes or harms that result.
callc•1mo ago
Shhhhh that’s a primary unspoken feature - lack of responsibility
65•1mo ago
I've always found it much quicker to just... do the work myself. AI slows me down more than anything.
websiteapi•1mo ago
fair. I used to think that too, but I find, at least for golang, the SOTA models write tests way faster than I would be able to. TDD is actually really possible with AI imo. except of course you get the scaffolding implementation (I haven't figured out a way to get models to write tests in a way that ensures the tests actually do something useful without an implementation).
bediger4000•1mo ago
Your final sentence is interesting. I'm not a strict doctrine adherent, but in TDD, don't you write some minimal test, then implement the system to pass the test?
websiteapi•1mo ago
yes, but I find it hard to constrain it to a minimal implementation. what usually happens is it writes some tests, then an implementation, and then, according to its thinking, makes some modifications. it works with a relatively precise prompt, but starts to go a bit off the rails when you say things in broad terms ("write tests to ensure concurrency works, and the implementation to ensure said tests are correct")
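
For illustration, a minimal sketch of the test-first shape being described here, in Go since that's the language mentioned above. Sum and its test are hypothetical stand-ins, not anything from this thread; the point is just that the test gets pinned down first and the implementation is kept to the minimum that passes it:

    // sum_test.go - written (or generated) first; it fails until a
    // minimal Sum exists.
    package sum

    import "testing"

    func TestSum(t *testing.T) {
        if got := Sum([]int{1, 2, 3}); got != 6 {
            t.Errorf("Sum([1 2 3]) = %d, want 6", got)
        }
    }

    // sum.go - step two: the smallest implementation that makes the
    // test above pass, and nothing more.
    package sum

    // Sum returns the total of all values in xs.
    func Sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

The hard part, per the comment above, is getting the model to stop at the first block instead of jumping straight to the second.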
nick486•1mo ago
>you would still be bottlenecked on humans' ability to actually evaluate and verify what it is doing and reconcile that with what you wanted it to do.

this sort of assumes that most humans actually know what they want to do.

It is very untrue in my experience.

It's like most complaints I hear about AI art. Yes, it is generic and bland. Just like 90% of what human artists produce.

veunes•1mo ago
The problem goes deeper: verification is harder than generation. When you write an answer yourself, you build the logic chain from scratch. When verifying AI, you have to deconstruct its logic, cross-reference facts, spot hidden hallucinations, and only then approve. For complex cases (which are exactly what the humans were left with), the time for quality verification approaches the time to write from scratch. If the times become roughly equal, the AI stops being an accelerator and becomes just a source of noise that yields no productivity gains.
belter•1mo ago
Executive compensation is justified by "...enormous impact leadership decisions have on company outcomes..." yet when those decisions blow up spectacularly, the accountability somehow evaporates.

If your pay is 400 times the average employee salary because of your unique strategic vision, surely firing 4000 people based on faulty assumptions should come with proportional consequences?

Or does the high-risk, high-reward philosophy only apply to the reward part?

yoyohello13•1mo ago
We all know the answer. There is no actual defense of inflated CEO salaries. It’s just the people in power maintaining their power, as it always has been.
nobodyandproud•1mo ago
Some real leadership in contrast: https://www.wsj.com/business/fibrebond-eaton-bonus-walker-30...
sergiotapia•1mo ago
what is the source for this? seems like a random blog?
KaiserPro•1mo ago
Yeah I can't see a source for the internal admissions of regret.

If we take out the AI part of this and treat it like any other project, if what they admit is true, it represents a massive failure of judgement and implementation.

I can't see anyone admitting that in public, as it would probably end their career, or should do at least. Especially if a company is a “meritocracy”.

frm88•1mo ago
A couple of other sources:

https://m.economictimes.com/news/new-updates/ai-bubble-burst...

https://opentools.ai/news/salesforce-steps-back-from-ai-exec...

saos•1mo ago
Salesforce is B2B and complex software. I wouldn’t have expected them to lay off that much support staff. Surprising. They should be empowering their support staff with AI tools to improve customer experiences.

Though I’m a bit surprised they have that much support staff.

throwaway613745•1mo ago
Customer experience is secondary to making the C-suite more money.
gortok•1mo ago
What is this site? maarthandam.com? Is it a blog? An AI-generated “newspaper”? An internet newspaper? The menu doesn’t work on mobile, no articles appear to have a by-line, and there’s no link to outside sources to indicate the provenance of these quotes.
nextworddev•1mo ago
one-shotted, vibe-coded blog
narmiouh•1mo ago
Is it just me, or does anyone else see that this article has no real references for its claims, and that the articles look like AI slop?
alexanderchr•1mo ago
Yes, this reads like vacuous AI slop, and the **randomly bolded** text everywhere is a **dead giveaway**. At this point it's becoming a stronger signal than em-dashes.
herodotus•1mo ago
It is impossible to verify anything in this article. For example "In recent internal discussions and public remarks". Where are these public remarks? How did this author get access to internal discussions? I rate this article as clickbait nonsense.
port11•1mo ago
Seems based on CNBC's more informative article: https://www.cnbc.com/2025/09/02/salesforce-ceo-confirms-4000...

It does seem like Salesforce relies on Agentforce and therefore doesn't need as much support staff. But the pressure was also to “reduce heads”, which is a bit of a tone-deaf way to describe firing thousands of people.

nextworddev•1mo ago
This is a misread of Benioff's intent behind his comment lol.

Salesforce has a vested interest in maintaining its seat-based licenses, so it's not in favor of mass layoffs.

Internally, Salesforce is pushing Agentforce, full stop.

softwaredoug•1mo ago
For an AI agent to do a good job at customer support, you would need to

1. literally document everything in the product and keep documentation up to date (could be partially automated?)

2. Build good enough search to find those things

3. Be able to troubleshoot / reason / abstract beyond those facts

4. Handle customer information that goes against the assumptions in the core set of facts (ie customers find bugs or don’t understand fundamental concepts about computers)

5. Be prepared to restart the entire conversation when the customer gets frustrated with 1-4 (this is very annoying)
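
For illustration, a minimal sketch of how points 2-5 might compose into a single loop, in Go for consistency with the rest of the thread. Every helper here (searchDocs, askModel, escalateToHuman) is a hypothetical stand-in for a documentation index, an LLM client, and a human ticket queue, not any real API, and point 1 is assumed to have already happened behind searchDocs:

    package main

    import "fmt"

    // Hypothetical stand-ins; a real system would back these with a
    // search index, a model client, and a ticketing queue.
    func searchDocs(q string) []string { return []string{"doc snippet about " + q} }

    func askModel(q string, docs []string) (answer string, confident bool) {
        return "suggested fix", false // stub that never reaches confidence
    }

    func escalateToHuman(q string) string { return "escalated: " + q }

    // handle walks points 2-5: retrieve the documented facts, let the
    // model reason over them, allow for customer reports the docs don't
    // cover, and escalate rather than loop forever.
    func handle(question string) string {
        for attempt := 0; attempt < 3; attempt++ {
            docs := searchDocs(question)                  // point 2
            answer, confident := askModel(question, docs) // point 3
            if confident {
                return answer
            }
            // point 4: the customer may have found a real bug that the
            // documented facts assume away; retrying the same corpus
            // rarely helps, so the loop is deliberately bounded.
        }
        return escalateToHuman(question) // point 5, minus the annoyance
    }

    func main() {
        fmt.Println(handle("export job silently drops rows"))
    }

The bounded loop is the design point: per point 5, the failure mode to avoid is forcing a frustrated customer to restart the conversation from scratch.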

veunes•1mo ago
Point 1 (document everything) is the utopia that killed the project. In any complex system, documentation is a lossy compression of reality. The actual truth about how to fix bugs doesn't live in Confluence; it lives in senior heads, Slack chats, and intuition, and AI has no access to this layer of tribal knowledge.
pjc50•1mo ago
> declining service quality, higher complaint volumes, and internal firefighting

LLMs are a great technology for making up plausible looking text. When correctness matters, and you don't have a second system that can reliably check it, the output turns out to be unreliable.

When you're dealing with customer support, everyone involved has already been failed by the regular system. So they're an exception, and they're unhappy. So you really don't want to inflict a second mistake on them.

ben_w•1mo ago
All true. A counter, and a counter-counter:

The counter: the existing system of checks with (presumably) humans was not good enough. For the last 15 months or so, I have been dealing with E.ON claiming one thing and doing another, and had to escalate it to the Ombudsman. I don't think E.ON were using an AI to make these mistakes, I think they just couldn't get customer support people to cope with the idea "the address you have been posting letters to, that address isn't simply wrong, it does not exist". An LLM would have done better, except for what I'm going to say in the counter-counter.

The counter-counter is that LLMs are only an extra layer of Swiss cheese: the mistakes they make may be different to human mistakes or may overlap, but they're still definitely present. Specifically, I expect that an LLM would have made two mistakes in my case, one of which is the same mistake the actual humans made (saying they'd fixed everything repeatedly when they had not done so; see the meme about LLMs playing the role of HAL in 2001, failing to open the pod bay door), and the other would have been a mistake in my favour (the Ombudsman decided less than I asked for; an LLM would likely have agreed with me more than it should have).

mbfg•1mo ago
Maybe where AI needs to take over is at the CEO level.
binary132•1mo ago
[flagged]
cons0le•1mo ago
I wanted to express a similar sentiment, but I didn't understand how I could without leaving a rule-breaking comment.

It's my sincerely held opinion that we're fostering a culture here that ignores the "human impact" of the technology that we're rushing to adopt.

I'm well aware that many members of this community have achieved "success" through software. This includes the rapid adoption of new computing paradigms, new technology stacks, new frameworks, etc.

I am fortunate to be employed. But around me, when I step out of my house, it's painful. People are hurting. They're unemployed. They're depressed. And the younger generation is even worse. They can't even afford to dream.

I live in a corporate world of forced smiles and fake enthusiasm. I would hate for that same culture to take root here. We need to be able to express significant doubt, or even cynicism against AI, without fear of backlash.

tomhow•1mo ago
Hacker News can only be good if enough people make the effort to make it good. There is always going to be a mix of things that push the standard up and things that drag the standard down. That's how averages and distributions work.

Unfortunately what we see from you is a pattern of low-effort comments, some of which don't even bother with basic sentence formation features like capitalization at the start and a period at the end. That's a high-signal hallmark of low-effort comments. Looking down your comment feed we see many single-line comments that are low on substance and high in snark.

The guidelines make it clear we're trying for something better here. They ask us to be kind, and to avoid snark and swipes. They ask us to converse curiously. They ask us not to fulminate, and not to sneer, including at the rest of the community.

It's fine to want HN to be better. As moderators we certainly do; that's why we do this job. But it requires us all to actually make the effort to be better in our own conduct. When you see comments from other users that aren't up to standard, we need you to use the tools that have always been here, like downvoting, flagging and emailing us (hn@ycombinator.com) so we can take action.

It isn't other people's job to make HN good enough for you whilst you conduct yourself in this way. If you really want HN to be better, please do your part to raise the standards rather than dragging them down further.

binary132•1mo ago
If I were you I would be more concerned with the fact that you have allowed what was once a well-respected forum to become little more than a spam platform for AI shills. You can silence me, but I am not wrong and I’m not the only one who has noticed this. It’s very obvious.

You should understand that one way people improve the standards of a commons is by imposing social controls on those who violate norms which create a healthy society, such as by shilling. That is normal behavior on every forum I’ve ever seen.

When you allow there to be 100x more of this mindless slop than of anything else, the most any individual can do to resist the tide is to contribute to the voices trying to make antisocial behavior come with a cost.

It works, and because it works, people will continue to do it until you figure out how to keep a clean commons.

PS. I suppose you would probably say the same thing to Rob Pike (if he were a user of your site which he doubtless is not).

https://skyview.social/?url=https%3A%2F%2Fbsky.app%2Fprofile...

tomhow•1mo ago
Please don't sermonize to distract from your own record of disrespect towards HN and its guidelines.

The people you claim have “allowed” this have maintained HN for many years – 13 in dang's case, the majority of its history. The primary reason this is a place where people want to participate is because of the guidelines that have been developed and refined since HN's inception, and that we spend hours each day upholding. People have been heralding the decline of HN since it was barely more than a few months old [1], yet it continues to grow as a place where people want to showcase interesting work, which is what we most care about.

Generated comments and posts are banned, and we state this frequently. I spend time each day evaluating submissions and Show HNs to determine whether they're human-authored or AI-generated. We welcome people to flag generated content and email us so we can ban accounts with a pattern of posting it. Yes, it takes time for these mechanisms to kick in. HN is a public, anonymous site. Anyone can post anything, and the immune system takes time to do its work. That's always been the case.

There is a cohort of community members who have demonstrated a commitment to making HN better over several years through: (a) submitting good articles, (b) posting thoughtful comments, (c) observing the guidelines, (d) flagging bad submissions and comments, and (e) emailing us to point out guidelines breaches and to discuss the healthy functioning of the site. These are the people we listen to when they express concerns about HN's health, because they've established a track record of genuine contribution and care over several years.

From you, we see two comments prior to 2023, and little or none of the above kinds of actions. Instead: ragey fulmination, hyperbole, and ascribing views to us without basis. And now you hold yourself up as HN's heroic defender, having never undertaken the earnest, unglamorous, unseen work that other community members do to make this the place you claim to be defending.

Please, if you really want HN to be better, you are most welcome to start doing the things that other community members quietly do every day to help make it better.

[1] https://news.ycombinator.com/item?id=373801

kevin_thibedeau•1mo ago
Competent management would have implemented a trial run to evaluate the feasibility of the plan. These sociopaths ensured their own failure by lunging for the prize without realizing they stepped off a cliff.
dangoodmanUT•1mo ago
> “We assumed the technology was further along than it actually was,” one executive said privately, reflecting a growing recognition that AI performance in controlled demonstrations did not translate cleanly into real-world customer environments

stop. reading. evals.

Mountain_Skies•1mo ago
And when they can't undo their mistake, will they accept the consequences, or will they cry to the government that there are no workers available to do the jobs, so national policy must be modified to give Salesforce an even larger firehose of candidates to ignore? Companies complain endlessly that there isn't a huge stable of unicorns for them to pick and choose from, but those 4000 experienced staff were known good workers and they dumped them anyway to chase fantasies. Salesforce will demand the government fix their mistake for them. The larger the company, the more they expect to never have to pay for their mistakes.
TheGRS•1mo ago
I bounced out of this article pretty quick after seeing it was generated by AI.
xnx•1mo ago
Public company logic:

Firing people = smart cost cutting

Hiring people = strong vote of confidence in continued growth

anshumankmr•1mo ago
Ahem did you mean "rightsizing" and "rapid growth"?
matrix12•1mo ago
Sauce https://timesofindia.indiatimes.com/technology/tech-news/aft...
delduca•1mo ago
This site has zero reputation.
kevinwang•1mo ago
Thanks. Link should be changed to this.

Edit: oh wait, this article isn't the source either. It references an article by "The Information", which I assume is https://www.theinformation.com/articles/salesforce-executive... There's also this follow-up: https://www.theinformation.com/articles/story-salesforces-de...

It's paywalled, so I can't verify.

smartbit•1mo ago
Dec 21 article https://archive.is/oi302 and the Dec 23 follow-up https://archive.is/7RXKb
talos•1mo ago
The Information article can be found on archive.is.

Both the OP article and this Times of India article appear to be AI-generated summaries of the original article.

Craziness!

Robdel12•1mo ago
I’m surprised Hacker News is not questioning this in the slightest?

Is anyone really buying that they laid off 4k people _because_ they really thought they’d replace them with an LLM agent? The article is suspect at best, and this doesn’t in the slightest align with my experience with LLMs at work (it’s created more work for me).

The layoff always smelled like it was because of the economy.

davidgerard•1mo ago
The article also reads like it was written with a chatbot.
computerdork•1mo ago
Hmm, actually lines up for me at least. It was a pretty big news item a few months ago when Salesforce did this drastic reduction in their Customer Service department, and Marc Benioff raved about how great AI was (you might have just missed it):

  https://www.ktvu.com/news/salesforce-ai-layoffs-marc-benioff
At the time, it was such a big deal to a lot of us because it was a signal of what could eventually happen to the rest of us white-collar workers.

Of course, it could still happen, as maybe AI systems just need another few years to mature before trying to fully replace jobs like this...

... although, one thing I agree with you on is that there isn't much info online on these quotes from Salesforce executives, so they could be made up.

DougN7•1mo ago
I’m beginning to doubt very much that that will happen. AI/LLMs are already trained on 99% of all accessible text in the world (I made that stat up, but I think I’m not far off). Where will the additional intelligence come from that Salesforce needs for the long tail, the nuance, and the tough cases? AI is good at what it’s already good at - I predict we won’t see another order-of-magnitude improvement with all the current approaches.
computerdork•1mo ago
Hmm, am no LLM expert, but agree with you that the models themselves, in the individual subject domains, seem like they're starting to reach their peaks (writing, solving math, coding, music gen...), and the improvements are becoming a lot less dramatic than a couple of years ago.

But, feel like combining LLMs with other AI techniques could do so much more...

... As mentioned, am no expert, but seems like one of the next major focuses for LLMs is verification of their answers and, adding to this, giving LLMs a sense for when their results are right or wrong. Yeah, feel like the ability of an LLM to introspect itself, so it can gain an understanding of how it got its answer, might be of help in knowing if its answer is right (think Anthropic has been working on this for a while now), as well as scoring the reliability of the information sources.

And they could also mix in a formal verification step, using some form of proof to show that the results are right (for those answers that lend themselves to formal verification).

Am sure all of this is currently being tried. So any AI experts out there, feel free to correct me. Thanks!

veunes•1mo ago
The idea of formal verification works great for code or math where clear rules exist, but in customer support, there is no formal specification. You can't write a unit test for empathy or for "did we correctly understand that the customer actually wants a refund even though they're asking about settings." This is the neuro-symbolic AI problem: to verify an LLM answer, you need a rigid ontology of the world (Knowledge Graph or rules), but the real world of customer interaction is chaos that cannot be fully formalized.
computerdork•1mo ago
Ah yes, and actually, agreed (as mentioned, formal verification is only possible for "those answers that lend themselves to it").

Interesting that you mentioned Knowledge Graphs; haven't heard about these in a long time. Just looked up the "Commonsense knowledge" page on Wikipedia, and it seems like they're still being added to. Would you happen to know if they're useful yet and can do any real work, or are good enough to integrate with LLMs?

EagnaIonat•1mo ago
I checked and this appears to be the source.

https://timesofindia.indiatimes.com/technology/tech-news/aft...

It isn't regret; they are trying to sell their Agentforce product.

stogot•1mo ago
Dogfooding hype!
rsynnott•1mo ago
I mean, this might be a case where it’s actually sort of credible. It was a _very_ deep cut (basically half the workforce), the Salesforce guy is a particularly over-the-top AI true believer, and if they are now reversing course and re-hiring, well, nothing has happened to the economy in the last couple of months that would suggest that, if it was related to the economy. If anything, things are looking even more uncertain/ominous.
belter•1mo ago
It's a report from The Information:

"Why Our Story on Salesforce’s Declining Trust in LLMs Hit a Nerve" - https://www.theinformation.com/articles/story-salesforces-de...

https://archive.is/7RXKb

arnonejoe•1mo ago
What SWE would want to work there after reading this?
simonw•1mo ago
maarthandam.com is a weird website. Recent posts:

    Salesforce regrets firing 4000 experienced staff and replacing them with AI
    December 25, 2025
    New Chennai Café Showcases Professional Excellence of Visually Impaired Chefs
    December 22, 2025
    Employee Who Worked 80 Hour Weeks Files Lawsuit Alleging Termination After Approved Medical Leave
    December 21, 2025
    UPS Sued for Running Holiday Business By Robbing Workers of Wages
    December 18, 2025
    This Poor Man’s Food is A Nutritional Powerhouse that is Often Ignored in Tamil Nadu
    October 5, 2025
    Netizens Mourn as Trump Was Found Alive, Promising Tariffs Instead
    August 31, 2025
Looks like a clickbait farm of some sort?
coliveira•1mo ago
The most stupid narrative ever. If AI is so good for productivity, why don't you use it to make your 4000 workers produce even more than other companies? Why do you need to fire them, so that now you have your hands tied behind your back and go back to producing the same amount of software? It is completely obvious that the goal is to fire workers, not to get AI stuff done.
thunky•1mo ago
> If AI is so good for productivity, why don't you use it to make your 4000 workers produce even more than other companies?

Because they don't have 4000+ workers worth of work to do?

coliveira•1mo ago
If they cannot think of new features to improve the software, I'm pretty sure their competition can.
thunky•1mo ago
There isn't an endless supply of features waiting to be built and money waiting at the door to pay for them. Do we really think that the only thing keeping them from being the biggest company on earth is their shortage of developer talent?
coliveira•1mo ago
So you really believe that we've arrived at the end of software? It's obvious that a competitor could create better software (if that were possible with AI).
thunky•1mo ago
> So you really believe that we've arrived at the end of software?

No that's not what I'm saying. I'm saying that demand (for a product or service) is what drives the amount of labor that is performed, not the other way around.

If a company has maxed out the number of widgets they can sell in their market and adding new features will not change that, then adding more labor makes no sense.

It follows that making their existing labor more productive leads to layoffs.

throw123ha71•1mo ago
So Salesforce is ahead of Microsoft in wisdom. Nadella is focusing on his grand visions again and is telling dissenters to leave:

https://timesofindia.indiatimes.com/technology/tech-news/mic...

He also uses Cultural Revolution tactics, turning the young against the old. I imagine the AI house of cards will collapse soon and he'll be remembered as the person who enshittified Windows after the board fires him.

stego-tech•1mo ago
I’d love my old job back at this point. I genuinely miss working with such talented colleagues.
skybrian•1mo ago
This reads like a polished newspaper article, but I've never heard of this website before and there are no links.

A search found a similar article from the Times of India which credits The Information, but there's no good way for non-subscribers to read it.

veunes•1mo ago
They shouldn't have tried to force LLMs into doing something current models aren't designed for: semantic understanding of "unknown unknowns". Tier-2/3 support isn't just about picking an answer from a knowledge base; it requires deduction, empathy, and finding solutions that don't exist yet. Models excel at generating relevant text for FAQs, but the moment a task requires understanding novel context, correlating non-obvious facts, or recognizing subtle emotional cues from a customer, current LLM architectures fail ruthlessly.