Get prepared. Something is coming *soon*
And how any even slightly skeptical comment gets downvoted to hell. One may start thinking there are bots promoting the narrative.
Or maybe you're choosing to perceive bots when actually a lot of people disagree with you?
On the other side, the doom posters tend to be awfully mediocre professionals (or, again, conmen leveraging FOMO). Skeptics like the one in the article tend to be dismissed. I'm also a skeptic, and someone you would probably define as a 10x, I think, except a few years ago I would have just been, you know, good at my job?
Please let me know when I'm going to be automated so I can start becoming good at something else. The future may not be bright for a number of reasons, but I still have not submitted to doom.
In fact, I go and implement dumb AI models in many companies, and executives immediately ask how many people they can fire with this advancement.
So while there may be lots of consultants losing their jobs, that’s not because AI tools do the work better. It’s because management thinks investors will accept the story that AI tools will do the work better and save money. Management, and investors, don’t know, can’t judge, and honestly don’t actually care if it’s better or worse. And they run things so poorly it would be impossible to tell anyway.
But now you can hire one customer service person, who can use AI agents to provide top-quality customer service. Previously you needed to hire five people, which wasn't worth it.
So you went from no customer service employee to 1.
I suspect that this is what will happen. Many companies will hire their first customer service person, or more. Many big companies will lay off most of their customer service people. The net effect might actually increase total customer service employment.
I suspect that job openings for customer service employees will actually be higher than now but companies won't be able to find enough AI-skilled people to fill the job. We're going to read about how there are more job openings than ever but companies can't find the AI skillset they need. This is why I think people who adopt AI now, learn it, understand it, get good at it, will be in high demand.
People thought AI being better than a human at reading medical images would put radiologists out of a job. Instead, radiologists saw more demand than ever, because AI made scans more affordable and more accurate, which in turn drove up demand.
Same can happen for customer service. AI makes customer service cheaper, better, faster. More companies offer good customer service in order to stay competitive. More customers demand customer service because it's better now and they expect it since all companies big or small can afford quality customer service.
If you remove the human from the loop in customer service, you won't gain a thing.
* The customer wants the human touch.
* The company's systems were broken, and the customer wouldn't have called at all if they could quickly and easily do what they wanted online.
* Customers are routinely furious and want to complain and/or understand what happened, and the company wants to brush them off.

AI doesn't help with the first two; it only helps with "deflection" (what they call handling the last one).
Disclaimer: I'm an AI compute investor.
LLM performance is already plateauing; models will get more efficient. Good-enough models will be deployed on chips, the same way H.264 is a good-enough video codec but used ubiquitously.
The anecdote in there is about complex B2B enterprise software. That's not the majority of customer support, and is very heavy on escalating to actual experts.
You don't have to remove 100% of the jobs to have huge effects. Automating large parts of a few sectors would already create significant disruptions.
I think this mentality must have its own imminent apocalypse. Gifted with an enormous increase in potential productivity, the decision is to do the same but cheaper? Who allocates capital to such spiritless commodification? It all feels like using a printing press to make one bible a month.
There must be a role that can be more productive. It might not necessarily be our skill sets that fit those roles (and the roles might be more stratified), but someone is going to be able to do more, and be paid more.
These days it's hard to get people to read an email longer than five lines, yet people are super excited about abundant masses of text generated by LLMs. It does not compute....
That's something that needs to be addressed by lawmakers ASAP. There needs to be a right to speak to a human, or (the perhaps overly tech optimistic route) a prohibition of AI that doesn't have adequate decision-making power.
- before 2007 there was no modern smartphone
- before 2001 there was no Wikipedia
- before 1995 less than 10 percent of home users in rich countries had internet
- before 2023 there was no AI available to home users
Hardware has been getting faster by a factor of ~100 every 10 years, and ~10,000 every 20 years. AI currently develops faster because of a combination of software and hardware improvements. Even if the best current system is right only 1 time in 100 now, it's likely to be nearly always accurate in 10 years.
I also like to remind people that the phone I am writing this on (an iPhone 12) has the same computing power as the Earth Simulator in 2003, which was the fastest computer on Earth back then.
Imagine this development and think what changes might come.
That's still roughly three orders of magnitude more than the iPhone 12 (0.02 Linpack TFLOPS, 4 GB RAM, 256 GB storage).
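A quick back-of-the-envelope check on that comparison. The 0.02 TFLOPS figure is the iPhone estimate cited in the thread; the ~35.86 TFLOPS is the Earth Simulator's published Linpack Rmax from the TOP500 list of that era:

```python
import math

# Assumed figures: Earth Simulator Linpack Rmax (~2003 TOP500 #1)
# vs. the iPhone 12 Linpack estimate cited in this thread.
earth_simulator_tflops = 35.86
iphone12_tflops = 0.02

ratio = earth_simulator_tflops / iphone12_tflops
print(f"ratio: {ratio:.0f}x")                           # ~1793x
print(f"orders of magnitude: {math.log10(ratio):.2f}")  # ~3.25
```

So by this benchmark the 2003 machine still outruns the phone by a factor of roughly 1,800, i.e. a bit over three orders of magnitude.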
For things where the end customer doesn’t care if they’re interacting with an AI, reading content by an AI, etc. – or if the company doesn’t care what the customer thinks (see: automated phone customer support lines for the last twenty years) – the work will be replaced by AI work. Examples are any kind of rote documentation, generic digital asset creation like blog images, low level customer support, and most things where the company doesn’t really care about the customer, because the company is getting paid regardless.
If it does matter what the end customer thinks, the role will become increasingly humanistic in nature. Examples are high-end enterprise sales, personality and expertise-driven media and content, and anything where being “revealed” as an AI is perceived negatively.
"Triaging by LLM before sending a task to any human" can work for almost anything, not just support calls. On another story I saw someone mention that they'd like something like an ad-blocker, but for content - a "content-blocker". It's not too hard to run even a local model that, via a browser extension, scans the current page and places it into one of several bins: read verbatim, summarise with ChatAI, ignore completely, read and mark for re-reading.
Software dev? Bin a ticket into "complex", "simple", "talk to lead dev".
Software proposal? Bin the proposal into "CotS available", "FOSS available", "Quick dev", "Too costly to proceed".
Bookkeeping? Accounting? They all have tasks that can be binned.
What does this all mean, I hear you ask? Well, you no longer need as many employees if some of the bins are "ChatAI and/or agent can complete this" with human review.
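The binning idea above can be sketched in a few lines. Everything here is illustrative: the bin names come from the software-dev example in this thread, and the keyword matcher is just a runnable stand-in for what would really be an LLM classification call (local or hosted):

```python
# Sketch of "triage by LLM before sending a task to any human".
# The keyword rules below are a toy stand-in for a real model call.

BINS = ["complex", "simple", "talk to lead dev"]

def classify_ticket(text: str) -> str:
    """Stand-in for an LLM classification step: route a dev ticket into a bin."""
    lowered = text.lower()
    if any(word in lowered for word in ("architecture", "migration", "outage")):
        return "talk to lead dev"
    if any(word in lowered for word in ("typo", "copy change", "rename")):
        return "simple"
    return "complex"

def triage(tickets):
    """Group tickets by bin so humans only review the queues that need them."""
    routed = {b: [] for b in BINS}
    for ticket in tickets:
        routed[classify_ticket(ticket)].append(ticket)
    return routed

queues = triage([
    "Fix typo on the pricing page",
    "Plan database migration to the new cluster",
    "Investigate intermittent login failures",
])
# "simple" tickets could go to an agent with human review;
# "talk to lead dev" goes straight to a person.
```

Routing humans only to the bins that need them is exactly the headcount lever described above.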
So, yeah, a lot of people are going to be out of work if this works like they say it does.
If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable. Why would the company get rid of them just at that moment?
Well, some will, because some companies are badly-run. Others will take advantage of the opportunity.
You're assuming unbounded demand for whatever product the company is producing. If demand for their product is bounded, having 1 dev produce the output of 5 devs means that the company is going to have devs simply sitting around doing nothing for most of the day.
> If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable.
I don't follow this argument - there is a practical limit to how much development a company requires. In the past they may have had a team of 10 to satisfy that limit. If the limit is satisfied by a team of 2 the company... does what exactly?
After all, a limit is a limit.
Where are these businesses that only ever want to sell the same amount of the same stuff forever?
AI will enable significantly faster economic growth, which is something the EU has been making impossible with legislation designed to destroy Europe's economic advantage.
(actually, MEGA would be a great acronym, but Trump's friends in the EU are more focused on dismantling it rather than making it great)
robotswantdata•1h ago
Been in this space over a decade and this time really is different. It’s hard for humans to perceive the exponential, it will be slow then sudden.
Madmallard•1h ago
What exactly will these agents be able to do with enough consistency, accuracy, and reliability that people will want to hire them over humans?
In my experience with even the most basic implementation of agents, i.e. customer service chat bots, I literally cannot stand interacting with them even once. They are extremely unhelpful and I will hang up or immediately ask to speak to a human.
robotswantdata•1h ago
I had the same opinion until a few months ago; now I'd prefer the [redacted company so as to not give free marketing] AI agent. You'll start seeing this wave in around 3-6 months, as most are still in trials.
badgersnake•1h ago
(Let’s not talk about my blockchain startup and my VR startup and my NFT startup). My house is nice though.
n4r9•29m ago
Bear in mind this is a B2B enterprise company with a mix of legacy and greenfield. Might be different elsewhere.
ares623•1h ago
I don't want Codex dammit! I'm a Claude Code man.
tripledry•33m ago
True, but there are also perception biases that lead us to believe progress is exponential, even though it may well be an S-curve.
I'm having a hard time finding the right terms, but I'm sure there is some bias to think that "the line goes up".