Oof
It's pay big tech or fall behind.
To be fair, the PowerPoint they were shown at that AI Synergies retreat probably was very slick.
It's almost like, and stay with me here, the vast majority of tech companies are now run by business graduates who do not understand tech AT ALL, have never written a single line of code in their lives, and only know how to optimize businesses by cutting costs and making the products worse until users revolt.
It is because they think it will 10x their chances of getting a really good engineer for 1/10th the cost.
At least that is my theory. Maybe I am wrong. I try to be charitable.
The initial years of adopting new tech have no net return because it's investment. The money saved is offset by the cost of setting up the new tech.
But then once the processes all get integrated and the cost of buying and building all the tech gets paid off, it turns into profit.
Also, some companies adopt new tech better than others. Some do it badly and go out of business. Some do it well and become a new market leader. Some show a net return much earlier than others because they're smarter about it.
No "oof" at all. This is how investing in new transformative business processes works.
> GenAI has been embedded in support, content creation, and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.
They are not seeing the structural "disruptions" that were present for previous technological shifts.
Many new ideas came through promising to be "transformative" but never reached anywhere near the impact that people initially expected. Some examples: SOA, low-code/no-code, blockchain for anything other than cryptocurrency, IoT, NoSQL, the Semantic Web.
Each of these has had some impact, but they've all plateaued, and there are very good reasons (including the results cited in TA) to think GenAI has also plateaued.
My bet: although GenAI has plateaued, new variants will appear that integrate or are inspired by "old AI" ideas[0] paired with modern genAI tech, and these will bring us significantly more intelligent AI systems.
[0] a few examples of "old AI": expert systems, genetic algorithms, constraint solving, theorem proving, S-expression manipulation.
What are you talking about? The return on investment from computers was immediate and extremely identifiable. For crying out loud "computers" are literally named after the people whose work they automated.
With Personal Computers the pitch is similarly immediate. It's trivial to point at what labour VisiCalc automated & improved. The gains are easy to measure and for every individual feature you can explain what it's useful for.
You can see where this falls apart in the Dotcom Bubble. There are very clear pitches: "Catalogue store but over the internet instead of a phone" has immediately identifiable improvements (not needing to ship out catalogues, being able to update it quickly, not needing humans to answer the phones).
But the hype and failed infrastructure buildout? Sure, Cisco could give you an answer if you asked them what all the internet buildout was good for. Not a concrete one with specific revenue streams attached, and we all know how that ends.
The difference between Pets.com and Amazon is almost laughably poignant here. Both were ultimately attempts to make the "catalogue store but on the computer" model work, but Amazon focussed on broad inventory and UX. They had losses, but managed to contain them and became profitable quickly (Q4 2001). Amazon's losses shrank as revenue grew.
Pets.com's selling point was selling you stuff below cost. Good for growth, certainly, but this also means that their losses grew with their growth. The pitch is clearly and inherently flawed. "How are you going to turn profitable?" "We'll shift into selling less expensive goods." "How are you going to do that?" Uhhh.....
...
The observant will note: This is the exact same operating model of the large AI companies. ChatGPT is sold below unit cost. Claude is sold below unit cost. Copilot is sold below unit cost.
What's the business pitch here? Even OpenAI struggles to explain what ChatGPT is actually useful for. Code assistants are the big concrete pitch, and even those crack at the edges as study after study shows the benefits appear to be psychosomatic. Even if Moore's law hangs on long enough to bring inference cost down (never mind per-task token usage skyrocketing, so even that appears moot), what's the pitch? Who's going to pay for this?
Who's going to pay for a Personal Computer? Your accountant.
The requested URL was not found on this server. Apache/2.4.62 (Debian) Server at nanda.media.mit.edu Port 443
It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
It's my general experience, also in prior workplaces, that sometimes a little drawing can tell a lot, and there's no quicker way to start one than to walk 3 meters and grab a marker. Same for getting attention towards a particular part of the board. On Excalidraw, it's difficult to coordinate people dynamically. On a whiteboard, people instinctively point to the parts they're talking about while talking, so you don't get person A arguing with person B about Y while B thinks they're talking about D, which is pretty close to Y as a topic.
> It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
This is 2025. Over Zoom, we use Gong; it records, transcribes, and summarizes the action items and key discussion points. No need to take notes.
My diagrams are already in Lucid with notes.
And also gaining information about the domain from the business and the business requirements for the system or feature.
This I largely agree with. If your tech job can be done from Bozeman instead of the Bay Area there's a decent chance it can be done from Bangalore.
> which itself is an inevitable milestone toward full automation
But IMHO this doesn't follow at all. Plenty of factory work (e.g. sewing) was offshored decades ago but is still done by humans (in Bangladesh or wherever) rather than robots. I don't see why the fact that a job can move from the Bay Area to Bozeman to Bangalore inherently means it can be replaced with AI.
I would have been hard-pressed to find decent-paying remote work as a fully hands-on-keyboard developer. My one competitive advantage is that I am in the US and can fly out to a customer’s site and talk to people who control budgets, and I’m a better than average English communicator.
In-person collaboration, though, is overrated. I’ve led mid-six-figure cross-organization implementations for the last five years sitting at my desk at home with no pants on, using Zoom, a shared Lucid app document, and shared Google Docs.
Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite; people rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products/services and your own organisation, never mind empowering them to do things, is much harder.)
They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical in the way that outsourced development breaking your ability to put out new features and fixes is.
AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that need to be handled with precision, nor can they be empowered to make configuration changes. (Consider: Prompt-injection leading to SIM hijacking and other such messes.)
But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you have nothing to lose from using AI.
I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. If not that, the requests that did require action on accounts could be handled by an LLM with some guardrails, if we can secure against prompt injection.
Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.
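To make "an LLM with some guardrails" concrete, here's a minimal sketch of the shape such a thing could take. Everything in it is invented for illustration (the action names, and the keyword-based classify_intent stand-in for the actual model call); it's not from the thread or any real product:

```python
# Hypothetical sketch: an LLM routes customer messages into a fixed menu of actions,
# and guardrails decide which actions it may perform vs. which always go to a human.
from dataclasses import dataclass

SELF_SERVE_ACTIONS = {"reset_router_steps", "check_order_status", "resend_invoice"}
HUMAN_ONLY_ACTIONS = {"change_sim", "close_account", "issue_refund"}

@dataclass
class Intent:
    action: str
    confidence: float

def classify_intent(message: str) -> Intent:
    """Toy stand-in for the LLM call that maps a customer message to one known action."""
    text = message.lower()
    if "router" in text or "internet" in text:
        return Intent("reset_router_steps", 0.9)
    if "refund" in text or "never arrived" in text:
        return Intent("issue_refund", 0.9)
    return Intent("unknown", 0.3)

def handle(message: str) -> str:
    intent = classify_intent(message)
    # Guardrail 1: privileged account changes never run off model output alone.
    if intent.action in HUMAN_ONLY_ACTIONS:
        return "escalate_to_human"
    # Guardrail 2: unknown or low-confidence intents also escalate.
    if intent.action not in SELF_SERVE_ACTIONS or intent.confidence < 0.8:
        return "escalate_to_human"
    return intent.action

print(handle("my router keeps dropping the internet"))   # -> reset_router_steps
print(handle("my order never arrived, I want a refund")) # -> escalate_to_human
```

The design point is that the model only ever picks from a fixed menu, so a prompt injection can at worst select the wrong menu item; it can't invent a privileged action on its own.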
You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans only handling the uncertain cases.
Still brain damage, but less. A good thing?
If we project long term, could this mean that countries with the most capital to invest in AI and robotics (like the U.S.) could take back manufacturing dominance from countries with low wages (like China)?
And the idea that China has low wages is outdated. Companies like Apple don't use China for its low wages; countries like Vietnam have lower wages. China's strength lies in its manufacturing expertise.
The reason US manufacturers aren’t interested in taking small-volume, low-cost orders is that they have more than enough high-margin, high-quality orders to deal with. Even the small-ish machine shop out in the country, near the farm fields by some of my family’s house, has pivoted into precision work for a big corporation because it pays better than doing small jobs.
Tim Cook explains it better than I could ever do:
Tim Cook had a direct hand in this and knows it, and is now deflecting because it looks bad.
One of the comments on the video puts it way better than I could:
@cpaviolo : "He’s partially right, but when I began my career in the industry 30 years ago, the United States was full of highly skilled workers. I had the privilege of being mentored by individuals who had worked on the Space Shuttle program—brilliant professionals who could build anything. I’d like to remind Mr. Cook that during that time, Apple was manufacturing and selling computers made in the U.S., and doing so profitably.
Things began to change around 1996 with the rise of outsourcing. Countless shops were forced to close due to a sharp decline in business, and many of those exceptionally skilled workers had to find jobs in other industries. I remember one of my mentors, an incredibly talented tool and die maker, who ended up working as a bartender at the age of 64.
That generation of craftsmen has either retired or passed away, and the new generation hasn’t had the opportunity to learn those skills—largely because there are no longer places where such expertise is needed. On top of that, many American workers were required to train their Chinese replacements. Jobs weren’t stolen by China; they were handed over by American corporations, led by executives like Tim Cook, in pursuit of higher profits."
Though I think we should also disabuse ourselves of the idea that this can't ever be the case.
An obvious example that comes to mind is the US' inability to do anything cheaply anymore, like build city infrastructure.
Also, once you enumerate the reasons why something is happening somewhere but not in the US, you may have just explained how they are better de facto than the US. Even if it just cashes out into bureaucracy, nimbyism, politics, lack of will, and anything else that you wouldn't consider worker skillset. Those are just nation-level skillsets and products.
The idea that China is a low-wage country should just die. It was the case 10 years ago, not anymore.
Some parts of China have higher average salaries than some Eastern European countries.
The chance of a robotics industry in the US massively moving jobs away from China purely due to a pseudo-AI revolution replacing low-paid labour (without other external factors, e.g. tariffs or sanctions) is close to 0.
Now, if we speak about India and the low-skill IT jobs there, the story is completely different.
The wages for factory work in a few Eastern European countries are cheaper than Chinese wages. I suppose they don’t have the access to infrastructure and supply chains the Chinese do, but that is changing quickly due to the Russian war against Ukraine.
Then why hasn't it yet? In fact, some lower-wage countries such as China are at the forefront of industrial automation.
I think the bottom line is that many Western countries went out of their way to make manufacturing - automated or not - very expensive and time-consuming to get off the ground. Robots don't necessarily change that if you still need to buy land, get all the permits, if construction costs many times more, and if your ongoing costs (energy, materials, lawyers, etc) are high.
We might discover that AI capacity is easier to grow in these markets too.
Because the current companies are behind the curve. Most of finance still runs on Excel. A lot of other things, too. AI doesn't add much to that. But the new wave of Tech-first companies now have the upper hand since the massive headcount is no longer such an advantage.
This is why Big Tech is doing layoffs. They are scared. But the traditional companies would need to redo the whole business and that is unlikely to happen. Not with the MBAs and Boomers running the board. So they are doing the old stupid things they know, like cutting costs by offshoring everything they can and abusing visas. They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic/lazy, and brand loyalty sinks to the bottom. See how the S&P 500 minus its top 10 is flat or dumping.
Right. And AI is here to fix that!
If only because someone else has to build all the nuclear reactors that supply the data centers with electricity. /s
But it does make sense on a superficial level at least: why pay a six-pack of nobodies half-way 'round the world to.. use AI tools on your behalf? Just hire a mid/senior developer locally and have them do it.
Or err, since that's been taken down: https://web.archive.org/web/20250818145714/https://nanda.med...
The fundamental issue is wealth inequality. The ultimate forms of wealth redistribution are war and revolution. I personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
The issue is that there are a handful of people who are incredibly wealthy and are only getting wealthier. The majority of the population is struggling to survive and only getting poorer.
AI and automation will be used to further displace working people to eke out a tiny percentage increase in profits, which will further this inequality as people can no longer afford to live. Plus, those still working will have their wages suppressed.
Offshored work originally displaced local workers and created a bunch of problems. AI and automation are a rising tide at this point. Many in tech considered themselves immune to such trends, being highly technical and educated professionals. Those people are in for a very rude shock, and it'll happen sooner than they think.
Our politics is divided between those who want to blame marginalized groups (e.g. immigrants, trans people, "woke" liberals) for declining material conditions (and thus we get Brownshirts and concentration camps) and those who want to defend the neoliberal status quo in the name of institutional norms.
It's about economics, material conditions and, dare I say it, the workers' relationship to the means of production.
I do think more or less this too, but it could be 4 years or 40 before people get mad enough. And to be honest, the tech gap between civilian violence and state-sponsored violence has never been wider. Or, in other words, civilians don't have Reaper drones etc etc.
As for the tech gap, I disagree.
The history of post-WW2 warfare is that asymmetric warfare has been profoundly successful, to the point where the US hasn't won a single war (except, arguably, Grenada, if that counts, which it does not) since 1945. And that's a country that spends more on defence than something like the next 23 countries combined (IIRC).
Obviously war isn't exactly the same thing, but it's honestly not that different to suppressing violent dissent. The difficulty (since 1945) hasn't been defeating an opposing military on the battlefield. The true cost is occupying territory after the fact. And that is basically the same thing.
Ordinary people may not have Reaper drones, but as we've seen in Ukraine, consumer drones are still capable of dropping a hand grenade.
Suppressing an insurrection or revolt is unbelievably expensive in terms of manpower, equipment and political will. It is absolutely untenable in the long term.
Not sure how long it will take for a critical mass to realize that we are in a class war, and that placing the blame on anything else won't solve the problem.
IOW, I agree with you, I also think we are beyond the point where electoral politics can solve it - we have full regulatory capture by the wealthy now. When governments can force striking workers back to work, workers have zero power.
What I wonder, though, is why the wealthy allow this to persist. What's the end game here? When no one can afford to live, who's buying products and services? There'll be nothing to keep the economy going. The wealthy can end it at any time, so what is the real goal? To be the only ones left on earth?
They're so aware of the power of class solidarity that they've designed society to ensure that there is no class solidarity among the working class. All of the hot button social issues are intentionally divisive to avoid class solidarity.
To be ultra-wealthy requires you to be a sociopath, to believe the bullshit that you deserve to be wealthy because of how good you are and, more importantly, that any poverty is a personal moral failure.
You see this manifest with the popularity of transhumanism in tech circles. And transhumanism is nothing more than eugenics. Extend this further and you believe that future war and revolution when many people die is actually good because it'll separate the wheat from the chaff, so to speak.
On top of all that, in a world of mobile capital, the ultra-wealthy ultimately believe they can escape the consequences of all this. Switzerland, a Pacific island, space, or, you know, Mars.
The neofeudalistic future the ultra-wealthy desire will be one where they are protected from the consequences of their actions on massive private estates where a handful of people service their needs. Working people will own nothing and live in worker housing. If a few billion of them have to die, so be it.
[1] Although I wouldn’t be surprised if some of the people who argue about this topic online are already independently wealthy
Your point hinges on: declining material conditions.
It is completely false - the conditions are pretty great for everyone. People have relatively good wages, but sure, inequality is increasing.
Since your main point is incorrect I don’t think your other points follow.
High value product work remains safe from AI automation for now, but it was also safe from offshoring so long as domestic capacity existed.
It may just be incompetence in large organisations though. Things get outsourced because nobody wants to manage them.
I'm sorry Dave. I can't answer that.
What this person has started is being done by WITCH companies at the largest scale and in the most fraudulent way possible.
It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner, with expert-level knowledge, unlike typical customer service reps.
I've used LLMs to help me fix Windows issues using pretty advanced methods, where MS employees would have just told me to either re-install Windows or send them the laptop and pay hundreds of dollars.
But I can’t imagine ever calling tech support for help unless it is more than troubleshooting and I need them to actually do something in their system or it’s a hardware problem where I need a replacement.
Over the past 3 years of calling support for any service or infrastructure (bank, health insurance, doctor, whatever), over like 90% of my requests were things only solvable via customer support or escalation.
I only keep track because I document the cases where I didn't need support in a list of "phone hacks" (like press this sequence of buttons when calling this provider).
Most recently, I went to an urgent care facility a few weekends ago, and they keep submitting claims to the arm of my insurance that is officed in a different state instead of my proper state.
That is because search is still mostly stuck in ~2003. But now ask the exact same thing of an LLM and it will generally be able to provide useful links. There's just so much information out there, but search engines just suck because they lack any sort of meaningful natural language parsing. LLMs provide that.
Cancel account - have them call someone.
Withdraw too much - make it a phone call.
Change their last name? - that would overwhelm our software; let’s have our operator do that after they call in.
Etc.
That doesn't make much sense. Either your system can handle it or it can't. Putting a support agent in front isn't going to change that.
Perhaps there is a group that isn’t served by legacy ui discovery methods and it’s great for them, but 100% of chat bots I’ve interacted with have damaged brand reputation for me.
All my interactions with any AI support so far have been repeatedly saying "call human" until it calls a human.
I.e. AI isn't allowed to offer me a refund because my order never arrived. For that, I have to spend 20 minutes on the phone with Mike from India.
99% seems like a pulled-out-of-your-butt number and hyperbolic, but, yes, there's clearly a non-trivial percentage of customer support that's absolutely terrible.
Please keep in mind, though, that a lot of customer support by monopolies is intended to be terrible.
AI seems like a dream for some of these companies to offer even worse customer service, though.
Where customer support is actually important or it's a competitive market, you tend to have relatively decent customer support - for example, my bank's support is far from perfect, but it's leaps and bounds better than AT&T or Comcast.
At my job, thanks to AI, we managed to rewrite one of our boxed vendor tools we were dissatisfied with, to an in-house solution.
I'm sure the company we were ordering from misses the revenue. The SaaS industry is full of products whose value proposition is 'it's cheaper to buy the product from us than hire a guy who handles it in-house'.
There are projects I lead now that I would have at least needed one or maybe two junior devs to do the grunt work after I have very carefully specified requirements (which I would have to do anyway) and diagrams and now ChatGPT can do the work for me.
That’s never been the case before. I’ve personally gone from programming in assembly, to C, to higher-level languages, and on the hardware side, from personally managing the build-out of a data center that had an entire room dedicated to a SAN with a whopping 3TB of storage to being able to do the same with a YAML/HCL file.
It worked out pretty well. Who knows how the software engineering landscape will change in 10 to 20 years?
I enjoyed Andrej Karpathy's talk about software in the era of AI.
You might well see more software profits if costs go down, but less revenue. Depends on Jevons paradox, really.
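As a back-of-the-envelope illustration of that Jevons point (all the numbers below are invented, not from the comment): whether total software spend rises or falls depends entirely on how much demand grows once the per-unit cost of producing software drops.

```python
# Hypothetical figures: cost per shipped feature falls 5x; does total spend rise or fall?
old_cost_per_feature = 10_000   # assumed cost to ship one feature today
new_cost_per_feature = 2_000    # assumed cost with AI tooling (5x cheaper)
old_demand = 100                # features the market buys at the old cost

for new_demand in (200, 500, 1_000):
    old_spend = old_cost_per_feature * old_demand
    new_spend = new_cost_per_feature * new_demand
    print(f"demand {old_demand} -> {new_demand}: spend {old_spend:,} -> {new_spend:,}")

# demand 100 -> 200:   spend 1,000,000 -> 400,000    (revenue shrinks)
# demand 100 -> 500:   spend 1,000,000 -> 1,000,000  (break even)
# demand 100 -> 1,000: spend 1,000,000 -> 2,000,000  (Jevons: total spend grows)
```

Demand has to grow by more than the cost falls for total revenue to go up; otherwise cheaper software just means a smaller pie.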
Original title "AI is already displacing these jobs" tweaked using context from first paragraph to be less clickbaity.