Though, like non-GAAP earnings & adjusted EBITDA, very few care. Those who do are often the old, technical, conservative & silent type of investor rather than podcasters or CNBC guests. RIP Charlie M.
Alternatively, though, if the market is bad and they’re not launching as many new products or appealing to as many new customers, customer support may be a cost center you’d force to find “AI efficiencies”
In addition, Salesforce grew its headcount in 2025 AFAIK, and 4,000 jobs are only around ~5% for them, which is too small to be a meaningful metric if you don't fully trust what their press department puts out (and you shouldn't)
Still, I see people using modern AI for small productivity boosts all over the place, including in private life (and often with a vastly underestimated risk assessment). So in the best case it's only good enough to let people process more of the backlog (which would otherwise be discarded due to time pressure, but isn't worthless), and in the worst case it will lead to, idk, 1/3 of people in many areas losing their jobs. But that is _without_ a major breakthrough in AI, just from better applying what AI can already do now :/ (and it excludes mostly-physical jobs, though it's worse for some other jobs, like low-skill graphic design positions)
And as software developers, it would be silly if we didn't think businesses would love to find a way to replace us, just as the software we have created has done to other roles for the past 60 years.
Companies like IBM & Klarna have made news for reducing positions like these & then re-hiring them.
AI, like most tech, will increase productivity & reduce headcount but it's not there yet. Remember, the days are long & the years are short.
Their function is around reconciling utilization and bills from multiple related suppliers with different internal stakeholders. They do a bunch of analysis and work with the internal stakeholders to optimize or migrate spend. It is high ROI for us, and the issue is both finding people with the right analytical and presentation skills and managing the toil of the “heavy” work.
Basically, we’re able to train interns to do 80% of the processing work with LLM tooling. So we’re going to promote two of the existing staff, replace 2 of the 6 vacancies with entry-level new grads, and use the unit to recruit talent and cycle them through.
In terms of order of magnitude, we’ll save about $500k in payroll, spend $50k in services, and get same or better outcomes.
Another example: we gave an L1 service desk manager Gemini and made him watch a YouTube video about statistics. He’s using it to analyze call statistics and understand how his business works without a lot of math knowledge. For example, he looked at the times when the desk was at 95th-percentile call volume and identified a few ways to time-shift certain things to avoid needing more agents, or to reduce overall wait times. All stuff that would require expensive talent and some sort of data analysis software… which frankly probably wouldn’t have been purchased.
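For flavor, the analysis he's doing is roughly the sketch below. This is a minimal illustration, not his actual tooling; the call-log file and column names are hypothetical:

```python
# Sketch of the call-volume analysis described above (hypothetical data:
# a CSV with one row per call and a "timestamp" column).
import pandas as pd

calls = pd.read_csv("call_log.csv", parse_dates=["timestamp"])

# Count calls per (day-of-week, hour-of-day) slot.
calls["dow"] = calls["timestamp"].dt.day_name()
calls["hour"] = calls["timestamp"].dt.hour
volume = calls.groupby(["dow", "hour"]).size().rename("calls")

# The busiest slots: at or above the 95th percentile of hourly volume.
threshold = volume.quantile(0.95)
peaks = volume[volume >= threshold].sort_values(ascending=False)
print(f"95th-percentile hourly volume: {threshold:.0f} calls")
print(peaks)  # candidates for time-shifting work away from these slots
```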
That's the real AI story. Stupid business people are just firing people. The real magic is using the tools to make smart people smarter. If you work for a big company, giving Gemini or ChatGPT to motivated contracts and procurement teams would literally print money for you, given the stuff your folks are missing.
Do you use tech to grow your business or increase dividends?
Also, reducing staff via attrition shows far better management skill than layoffs, which imo say more about the CEO & upper management.
This is to say that we know, from looking at outcomes over the long term, that the kinds of concrete gains you're describing are offset by subtler kinds of losses, ones you would most likely struggle to express as decimal numbers but which are equally real in their impact on your business.
Except it seems like the opposite is happening. CS grads have high unemployment. Companies laying off staff.
The rhetoric doesn't seem to add up to the reality.
I worked on both - my skillset went from coding pretty bar charts in SVG + Javascript to configuring Grafana, Dockerfiles and Terraform templates.
There's very little overlap between the two, other than general geekiness, but thankfully I'm still doing OK.
When he retired a few years ago, most of that was gone. The attorneys and paralegals were still required, there was a single receptionist for the whole office (who also did accounting) instead of about one for each attorney, and they'd added an IT person... but between Outlook and Microsoft Word and LexisNexis and the fileserver, all of those jobs working with paper were basically gone. They managed their own schedules (in digital Outlook calendars, of course), answered their own (cellular) phones, searched for documents with the computers, digitally typeset their own documents, and so on.
I'm an engineer working in industrial automation, and see the same thing: the expensive part of the cell isn't the $250k CNC or the $50k 6-axis robots or the $1M custom integration, those can be amortized and depreciated over a couple years, it's the ongoing costs of salaries and benefits for the dozen humans who are working in that zone. If you can build a bowl screw feeder and torque driver so that instead of operating an impact driver to put each individual screw in each individual part, you simply dump a box of screws in the hopper once an hour, and do that for most of the tasks... you can turn a 12-person work area into a machine that a single person can start, tune, load, unload, and clean.
The same sort of thing is going to happen - in our lifetimes - to all kinds of jobs.
To be fair, these positions never made that much sense, as they tend to cause more trouble than they help in the long run, but they exist anyway
And companies should know better than to throw away "junior, not yet skilled, but learning" positions (but then, many small startups aren't used to teaching juniors or don't have the resources to, which is a huge issue in the industry IMHO)
But I imagine for many of the huge "we mainly hire from universities"/FANG companies it will turn into "we only need senior engineers, and we hire juniors only to grow our own seniors". That means the moment your growth takes too long or stagnates by whatever _arbitrary_ metric, you get kicked out fast. And as with their hiring process, they have the resources, scale, and number of people who want to work for them to get away with using arbitrary, imperfect, effectively discriminatory metrics.
Another aspect is that a lot of the day-to-day work of software engineering is really dumb, simple churn, and AI has the potential to massively cut down the time a developer spends on it, so fewer developers are needed, especially in mid-to-low-skill positions.
Now the only luck devs have is that there's basically always more work that was cut due to time pressure but often isn't even super low priority, so getting things done faster might luckily not map one-to-one to fewer jobs being available.
- OCR ate a good chunk of data entry jobs,
- Automated translation ate a number of translation jobs,
- LLMs have eaten quite a few tier I support roles.
I don't have numbers though; maybe people are still doing data entry or hiring translators on Mechanical Turk.
Initially machine translation was way worse (by professional standards) than people assumed, essentially useless, you had to rewrite everything.
As time went on, and translation got better, the workflow shifted from doing it yourself to doing a machine pass, and rewriting it to be good enough. (Machine translation today is still just 'okay', not professional quality)
At the rates set 15 years ago you could eke out a decent-ish salary (a good one, even, if you worked long hours and were fast). Today, if you tried to do the work by hand, you'd starve to death.
The question is no longer whether AI will put people out of work, but how many and how quickly.
- Translation. See: Gizmodo firing its Spanish translation team and switching exclusively to LLMs.
- Editors. See Microsoft replacing their news editors at MSN with LLMs.
- Customer service. Various examples around the world.
- Article graphics for publications. See: The San Francisco Standard (which used it for various articles for a period), Bleepingcomputer.com, Hackaday (selectively, before some pushback).
- Voice acting. The game The Finals used synthetic voices for its announcers.
So jobs being killed by AI are basically being killed same way that office number crunching technology killed administrative assistant positions and put those tasks onto other people.
Take, for example, the purchasing department of a big company. Some project needs widgets. Someone crosses the specs against what their suppliers make, takes the result, and makes a judgement call. AI replaces that initial step, so a team of N can now do the work that formerly took a team of N + Y. Bespoke software could have replaced that step too, but it would have been more expensive, less flexible, etc.: there's all this work required to turn human-facing content into machine-parsable content, including the user's input, and the juice simply wasn't worth the squeeze. With AI doing that drudgery on an as-needed basis, the juice now is worth the squeeze in some applications.
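To make that concrete, here's a minimal sketch of the drudgery step, assuming the OpenAI Python client; the spec, catalog entries, and prompt are invented for illustration, and the human still makes the final call:

```python
# Sketch: let an LLM do the first-pass spec-vs-catalog comparison.
# Spec, catalog entries, and prompt are invented for illustration.
from openai import OpenAI

client = OpenAI()

spec = "M4 x 12 mm stainless socket-head cap screw, A2-70, RoHS compliant"
catalog = [
    "SUP-031: M4x12 SHCS, A2 stainless, DIN 912",
    "SUP-114: M4x16 pan head, zinc-plated steel",
    "SUP-207: M4x12 SHCS, A4 stainless, DIN 912",
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Rank these catalog items by how well they match the spec, "
            f"noting any mismatches.\nSpec: {spec}\nItems: {catalog}"
        ),
    }],
)
print(resp.choices[0].message.content)  # a buyer reviews and decides
```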
This sounds so weird to me, and I feel I am missing something.
You might as well ask why google built an ad company or email or video, or a browser or a phone OS etc, when they should have spent more money on their core search engine.
Hyperstition is a real thing... if, that is, you're William Gibson.
The series was really good. Too bad they cancelled it for being too expensive.
The book, however, is excellent- definitely recommend.
But the reality is that there will be new high-skilled jobs from AI thanks to Jevons' paradox: the more companies adopt AI, the greater the demand for highly skilled people who can use AI in more ways than we do today (a toy illustration follows below).
Not so much about being replaced, but there will be new jobs for people to do.
I guess for those people being 'replaced' it is a 'skill issue'
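For concreteness, here's a toy calculation of the Jevons/rebound effect. All numbers are invented; the point is just that when demand elasticity exceeds 1, cutting the cost of a task raises total spend on it rather than lowering it:

```python
# Toy illustration of Jevons' paradox (all numbers invented).
# With constant-elasticity demand q = k * p^(-e), halving the price of a
# task increases quantity demanded; total spend (p * q) rises iff e > 1.
def demand(price: float, elasticity: float, k: float = 100.0) -> float:
    return k * price ** (-elasticity)

for e in (0.5, 1.0, 1.5):
    before = demand(1.0, e)   # tasks done at the old cost
    after = demand(0.5, e)    # tasks done once AI halves the cost
    print(f"elasticity {e}: tasks x{after / before:.2f}, "
          f"total spend x{0.5 * after / before:.2f}")
```

In the last row, halving the cost raises total spend by ~41%, i.e. more total work paid for — that's the "new jobs" scenario; with inelastic demand (the first row), the same efficiency gain shrinks the pie.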
For me it's funny that the first time most programmers ever think about the ethics of automating away jobs is when they themselves become automated.
I contend it was when Dodge won the court case deciding that shareholders were more important than employees. It’s been a slow burn ever since.
In fact I fail to see any connection between those two facts other than that both are decisions to allow or not allow something to happen by OpenAI.
Imagine if ChatGPT gave "do a luigi" as a solution to walmart tracking your face, gait, device fingerprints, location, and payment details, then offering that data to local police forces for the grand panopticon to use for parallel reconstruction.
It would be unimaginable. That's because the only way for someone to be in the position to determine what is censored in the chat window, would be for them to be completely on the side of the data panopticon.
There is no world where technology can empower the average user more than those who came in with means.
It is funny, in the worst way possible of course, that even our chairs are not as stable as we thought they were. Even automation can itself be automated away.
Remember all those posts stating how software engineering is harder, more unique, somehow more special than other engineering or other types of jobs? Seems like it's time to re-evaluate those big-ego statements... but maybe it's just me.
0. The people who got into it just as a job
1. The people who thought they could do it as art
And #1 is getting thrashed and thrown out the window by the advent of AI coding tools, and by the revelation that companies didn't give a darn about their art. Same with AI art tools and real artists. It even raises the question of whether programming should ever have been viewed as an art form.
On that note, programmers collectively have never minded writing code that oppresses other people: constant distractions in Windows 11, unnecessarily deadly weapons at Northrop Grumman, automating away the livelihoods of millions in “inferior” jobs. That was even a trend, “disrupting” traditional industries (with no regard for what happens to those employed in them). Nice to see the shoe on the other foot, just a little.
For many of you here, keep in mind that your big salary came from disrupting and destroying other people’s salaries. Sleep well tonight, and don’t complain when it’s your turn.
The thing is, there's no innovation in the "track everything that breathes and sell the data to advertisers and cops" market.
They might get better at the data collection and introspection, but we as a society have gotten nothing but streamlined spyware and mental illness from these markets.
I'm less talking about automation and more about the underpinnings of the automation and the consequences in greater society. Not just the effects it has on poor ole software engineers.
It is quite ironic to see the automation hit engineers, who in the past generally did not care about the consequences of their work, particularly in data spaces. We have all collectively found ourselves in a local minimum of optimization, where the most profitable thing we can do is collect as much data on people as possible and continually trade it back and forth between parties who have proven they have no business holding it.
They are pursuing profits. Their ethical focus is essentially a form of theater.
Automation and technology have been replacing jobs for well over a century, almost always with better outcomes for society. If it were an ethical issue, then it would be unethical not to do it.
In any case, which jobs have been replaced by LLMs? Most of the actual ones I know of were BS jobs to begin with - jobs I wish had never existed. The rest are cases where CEOs are simply using AI as an excuse to execute layoffs (i.e., the work isn't actually being done by an LLM).
This time the purported capabilities of "AI" are a direct attack on thinking. Outsourcing thinking is creepy and turns humans into biorobots. It is different from robotic welding in an assembly line.
Even if new bullshit jobs are created, the work will just be that of a human photocopier.
[All this is written under the assumption that "AI" works, which it does not but which is the premise assumed by the persons quoted in the Register article.]
I also fail to see how LLMs can turn humans into "biorobots". You can still do all the things you could do before LLMs came along; the economic value of those things just decreased enormously.
All the good devs that I know aren't worried about losing their jobs; they're happy there's a shortcut through boilerplate and documentation. They are equally unhappy about having to talk management, who know very little about the world of dev, off the ledge as they get ready to jump with AI wings that will fail.
Finally, the original point was about censorship and controlling of information, not automating jobs.
We don't have to do anything. People always listen to this propaganda from the wealthy and think the latest gadgets are inevitable.
We can go on a general strike until copyright is restored and the "AI" companies go bankrupt. Journalists can write sharper and sharper articles. You can refuse to use "AI". If mandated in a job, demonstrate that it slows you down (probably true anyway).
You can demand an investigation into Suchir Balaji's demise (actually, Elon Musk recently endorsed a tweet demanding just that; you can, too).
You can boycott robotaxis. You can stop watching YouTube channels that do product placement of humanoid robots. You can do a lot of things.
AI CEOs are talking like they are oracles but they just need to please stakeholders.
Because instant misinformation is even faster and satisfies their biases.
https://www.ft.com/content/9c19d26f-57b3-4754-ac20-eeb627e87...
https://www.wsj.com/tech/ai/china-has-a-different-vision-for...
China restricts "AI" across the country to prevent cheating:
https://nypost.com/2025/08/19/world-news/china-restricts-ai-...
Why would we do that? Because we can! Why would we let some rich brats who have never created something and just steal human knowledge get even richer and control and surveil us all?
This sounds like naive wishful thinking. Add “theoretically” after every “can” for real life.
Instacart IPO'd. OpenAI hit $300B valuation. Different companies, different industries—yet look closer and you'll find the same names signing both checks.
Moreover, website owners even pay for captchas. It should be the other way around: the people who participated in training the neural nets should share in the profit and ownership of the networks, at the very least.
If we are lucky, AI provides a huge accelerant to insourcing manufacturing with advanced automation. That is an energy-saving play: it reduces the total distance objects are shipped, which wastes a lot of energy, especially for lower-value objects.
I feel like we need a new word for money going to datacenters instead of paychecks. 'AI taking jobs' implies AI is doing the work, which is not the case.
Let's reduce headcount and spin it as AI disruption! That way we don't have to acknowledge we overhired during covid, AND our stock price will go to the moon, as they say.
Crazy how these CEOs are so brazenly and openly committing fraud. The market and investors are playing along because the stock price is going up. The board doesn't give a fuck.
USA is one giant casino right now.
The most valuable thing AI can do right now is write code, and it couldn't do that without thousands of StackOverflow volunteers.
If you have an LLM that was trained on (say) everything on the internet except for programming, and then trained it on the Python.org online documentation, would that be enough for it to start programming Python? I've not tried, but I get the impression that it needs to see lots and lots of examples first.