https://nanda.media.mit.edu/ai_report_2025.pdf
https://web.archive.org/web/20250818145714/https://nanda.media.mit.edu/ai_report_2025.pdf
They interviewed 52 people, and some of them said yes to "Have you reduced headcount due to GenAI?" - which may indicate that those people believe this to be true.
It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
It's my general experience, also from prior workplaces, that sometimes a little drawing can tell a lot, and there's no quicker way to start one than to walk 3 meters and grab a marker. Same for drawing attention to a particular part of the board. On Excalidraw, it's difficult to coordinate people dynamically. On a whiteboard, people instinctively point at the parts they're talking about, so you don't get person A arguing with person B about topic Y while B thinks they're talking about topic D, which is pretty close to Y.
> It's not meant to be the actual documentation, and it makes sense to me since you don't want to write the actual documentation during the discussion with multiple highly paid devs and managers. Just take a photo at the end, and it's saved for when you make the documentation.
This is 2025, over Zoom, we use Gong, it records, transcribes and summarizes the action items and key discussion points. No need to take notes.
My diagrams are already in Lucid with notes
And also gaining information about the domain from the business and the business requirements for the system or feature.
This I largely agree with. If your tech job can be done from Bozeman instead of the Bay Area there's a decent chance it can be done from Bangalore.
> which itself is an inevitable milestone toward full automation
But IMHO this doesn't follow at all. Plenty of factory work (e.g. sewing) was offshored decades ago but is still done by humans (in Bangladesh or wherever) rather than robots. I don't see why the fact that a job can move from the Bay Area to Bozeman to Bangalore inherently means it can be replaced with AI.
I would have been hard pressed to find decent-paying remote work as a fully hands-on-keyboard developer. My one competitive advantage is that I am in the US, can fly out to a customer's site and talk to the people who control budgets, and I'm a better-than-average English communicator.
In-person collaboration, though, is overrated. I've led mid-six-figure cross-organization implementations for the last five years sitting at my desk at home with no pants on, using Zoom, a shared Lucid app document, and shared Google Docs.
Oof
It's pay big tech or fall behind.
To be fair, the PowerPoint they were shown at that AI Synergies retreat probably was very slick.
It's almost like, and stay with me here, but it's almost like the vast majority of tech companies are now run by business graduates who do not understand tech AT ALL, have never written a single line of code in their lives, and only know how to optimize businesses by cutting costs and making the products worse until users revolt.
The reason a competitive ecosystem of tech companies is effective has less to do with invisible-hand magic and more to do with big companies being dumb and conservative, largely as a consequence of their leader selection criteria.
Microsoft missing web and mobile.
Intel missing mobile and GPU.
Google missing productizing AI.
It is because they think it will 10x their chances of getting a really good engineer at 1/10th the cost.
At least, that is my theory. Maybe I am wrong; I try to be charitable.
I genuinely do, but kind of paradoxically also suspect I'm wrong. It's simply that it's something so far outside my domain that I just can't really appreciate their skills honed over many years of practice and training, because all I get to externally see are their often ridiculous limitations, failures, and myopia.
I imagine this is, in many ways, how people who have no understanding of e.g. software, let alone software development, see software engineers. I don't think it's uncharitable, it's just human nature. Imagine if we were the ones hiring CEOs. 'That guy's a total asshat, and we can get ten guys in India - hard-working, smart guys, standouts among 1.4 billion people - for the same price.' Go go go.
On the other hand, designing the software or engineering a solution to the problem seems like something they could do, as far as they know, because it's not something concrete that they can look at and see is beyond their abilities.
And if the agency doesn't do that, the good engineer will figure out he's being underpaid as slop-for-hire cannon fodder and move on of his own accord.
I'm talking about, e.g., taking ActiveMQ, building it on your own, and tweaking various calls and internal parameters to achieve a roughly 10x performance boost over a vanilla installation. Companies bundling it as part of their product would kill for, and pay serious money for, such a distro. The guy did this in maybe 3-4 days, going from never having touched ActiveMQ or any similar messaging system to having it reliably working and moving on to the next thing.
These folks can be dangerous, though: they come up with complex solutions that can be extremely hard for others to maintain, debug, and evolve. So their added value, on a long enough time scale, can actually be negative, even for a quite senior but not absolutely top-notch brilliant team. Not something 'code ninjas' (or, as I call them, brilliant juniors) care about, but if you work on something long-term you will see this pattern from time to time.
Also, these folks are hard to keep, since they get bored when things slow down and big challenges aren't around, and they quickly and easily move on. That makes the issue above a pretty serious item to consider.
The initial years of adopting new tech have no net return because it's investment. The money saved is offset by the cost of setting up the new tech.
But then once the processes all get integrated and the cost of buying and building all the tech gets paid off, it turns into profit.
Also, some companies adopt new tech better than others. Some do it badly and go out of business. Some do it well and become a new market leader. Some show a net return much earlier than others because they're smarter about it.
No "oof" at all. This is how investing in new transformative business processes works.
> GenAI has been embedded in support, content creation, and analytics use cases, but few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior.
They are not seeing the structural "disruptions" that were present for previous technological shifts.
Many new ideas came through promising to be "transformative" but never reached anywhere near the impact that people initially expected. Some examples: SOA, low-code/no-code, blockchain for anything other than cryptocurrency, IoT, NoSQL, the Semantic Web. Each of these has had some impact, but they've all plateaued, and there are very good reasons (including the results cited in TA) to think GenAI has also plateaued.
My bet: although GenAI has plateaued, new variants will appear that integrate or are inspired by "old AI" ideas[0] paired with modern genAI tech, and these will bring us significantly more intelligent AI systems.
[0] a few examples of "old AI": expert systems, genetic algorithms, constraint solving, theorem proving, S-expression manipulation.
Can't wait for Lisp to be the language of the future again.
Some of my friends reckon it'll happen the year after the year of Linux on the desktop. They're on Windows 11, though, so I don't know how to read that.
What are you talking about? The return on investment from computers was immediate and extremely identifiable. For crying out loud "computers" are literally named after the people whose work they automated.
With Personal Computers the pitch is similarly immediate. It's trivial to point at what labour VisiCalc automated & improved. The gains are easy to measure and for every individual feature you can explain what it's useful for.
You can see where this falls apart in the Dotcom Bubble. There are very clear pitches; "Catalogue store but over the internet instead of a phone" has immediately identifiable improvements (Not needing to ship out catalogues, being able to update it quickly, not needing humans to answer the phones)
But the hype and failed infrastructure buildout? Sure, Cisco could give you an answer if you asked them what all the internet buildout was good for. Not a concrete one with specific revenue streams attached, and we all know how that ends.
The difference between Pets.com and Amazon is almost laughably poignant here. Both were ultimately attempts to make "catalogue store but on the computer" work, but Amazon focussed on broad inventory and UX. They had losses, but managed to contain them and became profitable quickly (Q4 2001). Amazon's losses shrank as revenue grew.
Pets.com's selling point was selling you stuff below cost. Good for growth, certainly, but this also means that their losses grew with their growth. The pitch is clearly and inherently flawed. "How are you going to turn profitable?" We'll shift into selling less expensive goods "How are you going to do that?" Uhhh.....
...
The observant will note: This is the exact same operating model of the large AI companies. ChatGPT is sold below unit cost. Claude is sold below unit cost. Copilot is sold below unit cost.
What's the business pitch here? Even OpenAI struggles to explain what ChatGPT is actually useful for. Code assistants are the big concrete pitch, and even those crack at the edges as study after study shows the benefits appear to be psychosomatic. Even if Moore's law hangs on long enough to bring inference cost down (never mind per-task token usage skyrocketing, which makes even that appear moot), what's the pitch? Who's going to pay for this?
Who's going to pay for a Personal Computer? Your accountant.
I am not going to do that. If you won't take my word that "a computer doing a worksheet's worth of calculations automatically" is faster and less error-prone than "a human [with an electronic calculator] doing it by hand", then that's a you problem.
An Apple II cost $1300. VisiCalc cost $200. An accountant in that era would've cost ~10x that annually, and would either spend quite a bit more than 10% of their time doing the rote work, or hire dedicated people for it.
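The quoted figures make the break-even arithmetic easy to sketch. The ~10x salary multiple and the 10% rote-work share are the comment's own rough numbers, so this is a back-of-the-envelope check, not a real cost model:

```python
# Rough payback estimate for the Apple II + VisiCalc pitch, using the
# figures quoted above. The 10% rote-work share is the comment's own
# lower bound, so the real payback would likely be even faster.
hardware = 1300          # Apple II price ($)
software = 200           # VisiCalc price ($)
salary = 10 * hardware   # "~10x that annually" for an accountant
rote_share = 0.10        # fraction of time spent on rote calculation

annual_savings = salary * rote_share                  # $1,300/year of labor
payback_years = (hardware + software) / annual_savings

print(round(payback_years, 2))  # ~1.15 years to break even
```

Even under these conservative assumptions the machine pays for itself in about a year, which is why the pitch was so easy to make.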
Reality is complicated and messy. There are many hurdles to overcome, many people to convince, and a lot of logistics to handle. You can't just replace accountants with computers - it takes time. So you can understand why I find it easy to believe that a huge jump like the one with software can take time as well.
Back then companies needed a massive amount of people to sit and do all the calculations to do their accounting, but a single person using a computer could do the same work in a day. This was so easy and efficient that almost every bigger company started buying computers at the time.
You don't need to automate away accountants, you just need to automate away the many thousands of calculations needed to complete the accounting to save a massive amount of money. It wasn't hard to convince people to use a computer instead of sitting for weeks manually calculating sums on sheets.
It is well-documented, and called the "productivity paradox of computers" if you want to look it up. It was identified in 1987, and economic statistics show that personal computing didn't become a net positive for the economy until around 1995-1997.
And like I said, it's very dependent on the individual company. But consider how many businesses bought computers and didn't use them productively. Where it was a net loss because the computers were expensive and the software was expensive and the efficiency gained wasn't worth the cost -- or worse, they weren't a good match and efficiency actually dropped. Think of how many expensive attempted migrations from paper processes to early databases failed completely.
It's economic analysis of the entire economy, from the "outside" (statistics) inward. My point is that the individual business case was financially solvent.
Apple Computer did not need to "change the world" it needed to sell computers at a profit, enough of them to cover their fixed costs, and do so without relying on other people just setting their money on fire. (And it succeeded on all three counts.) Whether or not they were a minute addition to the entire economy or a gigantic one is irrelevant.
Similarly with AI. AI does not need to "increase aggregate productivity over the entire economy", it needs to turn a profit or it dies. Whether or not it can keep the boomer pension funds from going insolvent is a question for economics wonks. Ultimately the aggregate economic effects follow from the individual one.
Thus the difference. PCs had a "core of financial solvency" nearly immediately. Even if they weren't useful for 99.9% of jobs that 0.1% would still find them useful enough to buy and keep the industry alive. If the hype were to run out on such an industry, it shrinks to something sustainable. (Compare: Consumer goods like smartwatches, which were hyped for a while, and didn't change the world but maintained a suitable core audience to sustain the industry)
With AI, even AI companies struggle to pitch such a core, nevermind actually prove it.
I don't really understand what point you're trying to make. It seems like you're complaining that CapEx costs are higher in GenAI than they were in personal computing? But lots of industries have high CapEx. That's what investors are for.
The only point I've made is that "95% of organizations are getting zero return" is to be expected in the early days of a new technology, and that the personal computer is a reasonable analogy here. The subject here is companies that use the tech, not companies creating the tech. The investment model behind the core tech has nothing to do with the profitability of companies trying to use it or build on it. The point is that it takes a lot of time and trial and error to figure out how to use a new tech profitably, and we are currently in very early days of GenAI.
Computing was revolutionary, both at enterprise and personal scale (separately). I would say smartphones were revolutionary. The internet was revolutionary, though it did take a while to get going at scale.
Blockchain was not revolutionary.
I think LLM-based AI is trending towards blockchain, not general purpose computing. In order for it to be revolutionary, it needs to objectively and quantifiably add value to the lives (professionally or personally) of a significant piece of the population. I don't see how that happens with LLMs. They aren't reliable enough and don't seem to have any path towards reasoning or understanding.
Jobs like customer/tech support aren't uniquely suited to outsourcing. (Quite the opposite; People rightfully complain about outsourced support being awful. Training outsourced workers on the fine details of your products/services & your own organisation, nevermind empowering them to do things is much harder)
They're jobs that companies can neglect. Terrible customer support will hurt your business, but it's not business-critical in the way that outsourced development breaking your ability to put out new features and fixes is.
AI is a perfect substitute for terrible outsourced support. LLMs aren't capable of handling genuinely complex problems that need to be handled with precision, nor can they be empowered to make configuration changes. (Consider: Prompt-injection leading to SIM hijacking and other such messes.)
But the LLM can tell meemaw to reset her dang router. If that's all you consider support to be (which is almost certainly the case if you outsource it), then you have nothing to lose from using AI.
I worked in a call center before getting into tech when I was young. I don't have any hard statistics, but by far the majority of calls to support were basic questions or situations (like Meemaw's router) that could easily be solved with a chatbot. If not that, the requests that did require action on accounts could be handled by an LLM with some guardrails, if we can secure against prompt injection.
Companies can most likely eliminate a large chunk of customer service employees with an LLM and the customers would barely notice a difference.
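The "LLM with some guardrails" triage described in these comments can be sketched as a routing policy. Everything here is hypothetical for illustration: `classify` is a stub standing in for a real intent model or LLM call, and the confidence threshold is made up. The point is the policy, not the classifier: anything uncertain, or anything touching account state (the SIM-hijacking risk mentioned above), always escalates to a human.

```python
# Minimal sketch of confidence-threshold routing for support requests.
# classify() is a keyword stub; a real system would call a model here.
from dataclasses import dataclass

@dataclass
class Triage:
    intent: str
    confidence: float
    needs_account_change: bool

def classify(message: str) -> Triage:
    # Stand-in classifier: crude keyword matching for the sketch.
    text = message.lower()
    if "router" in text or "reset" in text:
        return Triage("troubleshooting", 0.95, False)
    if "sim" in text or "billing" in text:
        return Triage("account", 0.90, True)
    return Triage("unknown", 0.30, False)

def route(message: str, threshold: float = 0.8) -> str:
    t = classify(message)
    # Guardrail: low confidence OR account-changing requests go to a human,
    # regardless of how confident the classifier is.
    if t.confidence < threshold or t.needs_account_change:
        return "human"
    return "bot"
```

Under this policy the bot handles meemaw's router, while SIM changes and anything the classifier can't place go to a person, which matches the "humans only handle the uncertain cases" shape described above.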
You could anticipate a shift to using AI tools to achieve whatever content moderation goals these large networks have, with humans only handling the uncertain cases.
Still brain damage, but less. A good thing?
If we project long term, could this mean that countries with the most capital to invest in AI and robotics (like the U.S.) could take back manufacturing dominance from countries with low wages (like China)?
And the idea that China has low wages is outdated. Companies like Apple don't use China for its low wages; countries like Vietnam have lower wages. China's strength lies in its manufacturing expertise.
The reason US manufacturers aren't interested in taking small-volume, low-cost orders is that they have more than enough high-margin, high-quality orders to deal with. Even the small-ish machine shop out in the country, near the farm fields by some of my family's houses, has pivoted into precision work for a big corporation because it pays better than doing small jobs.
The other factors are: in any sort of manufacturing, the only time you are making money is when the equipment is making product.

If you are stopped for a changeover or setup, you are losing money. Changing over carries the risk of improper setup, where you lose even more money because you produce unusable product.
Where I live, the local machine shops support themselves in two ways: 1. Consistent volume work for an established customer. 2. Emergency work for other manufacturing sites: repair, or reverse engineering and creating parts to support equipment (fast turnaround and high cost).
They are willing to do small batches but lead times will be long since they have to work it into their production schedules.
Tim Cook explains it better than I ever could:
Tim Cook had a direct hand in this, knows it, and is now deflecting because it looks bad.
One of the comments on the video puts it way better than I could:
@cpaviolo : "He’s partially right, but when I began my career in the industry 30 years ago, the United States was full of highly skilled workers. I had the privilege of being mentored by individuals who had worked on the Space Shuttle program—brilliant professionals who could build anything. I’d like to remind Mr. Cook that during that time, Apple was manufacturing and selling computers made in the U.S., and doing so profitably.
Things began to change around 1996 with the rise of outsourcing. Countless shops were forced to close due to a sharp decline in business, and many of those exceptionally skilled workers had to find jobs in other industries. I remember one of my mentors, an incredibly talented tool and die maker, who ended up working as a bartender at the age of 64.
That generation of craftsmen has either retired or passed away, and the new generation hasn’t had the opportunity to learn those skills—largely because there are no longer places where such expertise is needed. On top of that, many American workers were required to train their Chinese replacements. Jobs weren’t stolen by China; they were handed over by American corporations, led by executives like Tim Cook, in pursuit of higher profits."
Though I think we should also disabuse ourselves of the idea that this can't ever be the case.
An obvious example that comes to mind is the US' inability to do anything cheaply anymore, like build city infrastructure.
Also, once you enumerate the reasons why something is happening somewhere but not in the US, you may have just explained how they are better de facto than the US. Even if it just cashes out into bureaucracy, nimbyism, politics, lack of will, and anything else that you wouldn't consider worker skillset. Those are just nation-level skillsets and products.
The idea that China is a low-wage country should just die. It was the case 10 years ago, not anymore.
Some parts of China have higher average salaries than some Eastern European countries.
The chance of a robotics industry in the US massively moving jobs away from China solely due to a pseudo-AI revolution replacing low-paid wages (without other external factors, e.g. tariffs or sanctions) is close to 0.
Now, if we speak about India and the low-skill IT jobs there, the story is completely different.
The wages for factory work in a few Eastern European countries are cheaper than Chinese wages. I suppose they don't have the access to infrastructure and supply chains the Chinese do, but that is changing quickly due to the Russian war against Ukraine.
Then why hasn't it yet? In fact, some lower-wage countries, such as China, are at the forefront of industrial automation.
I think the bottom line is that many Western countries went out of their way to make manufacturing - automated or not - very expensive and time-consuming to get off the ground. Robots don't necessarily change that if you still need to buy land, get all the permits, if construction costs many times more, and if your ongoing costs (energy, materials, lawyers, etc) are high.
We might discover that AI capacity is easier to grow in these markets too.
Because the current companies are behind the curve. Most of finance still runs on Excel. A lot of other things, too. AI doesn't add much to that. But the new wave of Tech-first companies now have the upper hand since the massive headcount is no longer such an advantage.
This is why Big Tech is doing layoffs. They are scared. But the traditional companies would need to redo the whole business, and that is unlikely to happen. Not with the MBAs and Boomers running the board. So they are doing the old stupid things they know, like cutting costs by offshoring everything they can and abusing visas. They end up losing knowledgeable people who could've turned the ship around, the remaining employees become apathetic and lazy, and brand loyalty sinks to the bottom. See how the S&P 500 minus the top 10 is flat or dropping.
Right. And AI is here to fix that!
If only because someone else has to build all the nuclear reactors that supply the data centers with electricity. /s
But it does make sense on a superficial level at least: why pay a six-pack of nobodies half-way 'round the world to.. use AI tools on your behalf? Just hire a mid/senior developer locally and have them do it.
Or err, since that's been taken down: https://web.archive.org/web/20250818145714/https://nanda.med...
If it requires a managed change, the engineering team helps them draft the execution and schedule.
Skills would be similar to IT or DevOps, but with the expectation that they can code.
Moreover, these kinds of upgrades sometimes involve unforeseen regressions, which again can't be solved by these employees.
The fundamental issue is wealth inequality. The ultimate forms of wealth redistribution are war and revolution. I personally believe we are already beyond the point where electoral politics can solve this issue and a violent resolution is inevitable.
The issue is that there are a handful of people who are incredibly wealthy and are only getting wealthier. The majority of the population is struggling to survive and only getting poorer.
AI and automation will be used to further displace working people to eke out a tiny percentage increase in profits, which will further this inequality as people can no longer afford to live. Plus, those still working will have their wages suppressed.
Offshored work originally displaced local workers and created a bunch of problems. AI and automation are a rising tide at this point. Many in tech considered themselves immune to such trends, being highly technical and educated professionals. Those people are in for a very rude shock, and it'll happen sooner than they think.
Our politics is divided by those who want to blame marginalized groups (eg immigrants, trans people, "woke" liberals) for declining material conditions (and thus we get Brownshirts and concentration camps) and the other side who wants to defend the neoliberal status quo in the name of institutional norms.
It's about economics, material conditions and, dare I say it, the workers relationship to the means of production.
I more or less think this too, but it could be 4 years or 40 before people get mad enough. And to be honest, the tech gap between civilian violence and state-sponsored violence has never been wider. Or, in other words, civilians don't have Reaper drones, etc.
As for the tech gap, I disagree.
The history of post-WW2 warfare is that asymmetric warfare has been profoundly successful, to the point where the US hasn't won a single war since 1945 (except, arguably, Grenada, if that counts, which it does not). And that's a country that spends more on defence than something like the next 23 countries combined (IIRC).
Obviously war isn't exactly the same thing, but it's honestly not that different from suppressing violent dissent. The difficulty (since 1945) hasn't been defeating an opposing military on the battlefield. The true cost is occupying territory after the fact. And that is basically the same thing.
Ordinary people may not have Reaper drones, but as we've seen in Ukraine, consumer drones are still capable of dropping a hand grenade.
Suppressing an insurrection or revolt is unbelievably expensive in terms of manpower, equipment and political will. It is absolutely untenable in the long term.
Not sure how long it will take for a critical mass to realize that we are in a class war, and that placing the blame on anything else won't solve the problem.
IOW, I agree with you, I also think we are beyond the point where electoral politics can solve it - we have full regulatory capture by the wealthy now. When governments can force striking workers back to work, workers have zero power.
What I wonder, though, is why the wealthy allow this to persist. What's the end game here? When no one can afford to live, who's buying products and services? There'll be nothing to keep the economy going. The wealthy can end it at any time, so what is the real goal? To be the only ones left on earth?
They're so aware of the power of class solidarity that they've designed society to ensure that there is no class solidarity among the working class. All of the hot button social issues are intentionally divisive to avoid class solidarity.
To be ultra-wealthy requires you to be a sociopath, to believe the bullshit that you deserve to be wealthy, it's because of how good you are and, more importantly, that any poverty is a personal moral failure.
You see this manifest with the popularity of transhumanism in tech circles. And transhumanism is nothing more than eugenics. Extend this further and you believe that future war and revolution when many people die is actually good because it'll separate the wheat from the chaff, so to speak.
On top of all that, in a world of mobile capital, the ultra-wealthy ultimately believe they can escape the consequences of all this. Switzerland, a Pacific island, space, or, you know, Mars.
The neofeudalistic future the ultra-wealthy desire will be one where they are protected from the consequences of their actions on massive private estates where a handful of people service their needs. Working people will own nothing and live in worker housing. If a few billion of them have to die, so be it.
[1] Although I wouldn’t be surprised if some of the people who argue about this topic online are already independently wealthy
Competition is why you have good products. Can you explain to me what incentivizes Apple to make functional and impressive iPhones instead of selling us barely working phones without cameras?
2nd: If I think capitalists are parasites, why would I attribute any good that Apple makes to them? Clearly it's the people doing the real work who deserve the praise for that.
They make up the bulk of that organism.
Your point hinges on: declining material conditions.
It is completely false - conditions are pretty great for everyone. People have relatively good wages, though, sure, inequality is increasing.
Since your main point is incorrect I don’t think your other points follow.
1. The stagnation or decline in real wages in the developed world in recent decades;
2. Increasing homelessness as a consequence of the housing affordability crisis;
3. How global poverty has increased in the last century under capitalism. This surprises some because defenders claim the opposite. China is singlehandedly responsible for the massive decrease in extreme poverty in the 20th century.
Maybe you're looking through the lens of tech. After all, we all have Internet-connected supercomputers in our pockets. While that's true, we're also working 3 jobs to pay for a 1 bedroom apartment where once a single job meant you had a house and enough to eat.
Extreme poverty throughout the world has dramatically reduced. In Western Europe it came down from 50% to less than 1% through the 20th century.
India brought it down dramatically and is continuing to do it. A simple Wikipedia search can tell you this.
Wages have been increasing in China and India, as well as the USA, after accounting for inflation. They're sort of stagnant in Europe.
Dismissing people with arguments doesn't work either. It doesn’t eliminate the feeling of inequality or change people's perspective about absolute vs relative wealth.
Why? Because the promise used to justify labor - that hard work will be rewarded - was deeply believed. The contradiction becomes visible when the wealthy hold 36,000 times more wealth than the average person[1]. No one can work 36,000 times harder or longer than someone else, so the belief is no longer tenable.
That leaves us with two choices: either acknowledge that "hard work alone" was never the full story, or take real steps to fix inequality. Pointing to poverty reduction in other countries doesn’t resolve this. It simply makes people feel unheard and resentful.
The average billionaire has $7B in wealth. Median individual U.S. wealth is $190,000.
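The 36,000x figure cited earlier in the thread is just the ratio of these two numbers:

```python
# The arithmetic behind the "36,000 times more wealth" claim,
# using the two figures quoted above.
avg_billionaire_wealth = 7e9   # $7B average billionaire wealth
median_us_wealth = 190_000     # median individual U.S. wealth ($)

ratio = avg_billionaire_wealth / median_us_wealth
print(round(ratio))  # 36842, i.e. roughly 36,000x
```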
Your first mistake is thinking hard work matters. No, it doesn't, and it shouldn't. Only work that provides value should matter - you don't deserve more money just for working 10x as hard when it doesn't matter to anyone.
Your entire comment hinges on a zero-sum line of thinking, and I don't abide by it. Things have improved for everyone, as I said above, but I also acknowledged that inequality is increasing. Rising inequality is a real issue. It can be tackled, but let's first acknowledge that prosperity has increased for pretty much everyone in the world.
Can you provide a source to backup your claim?
https://www.cnbc.com/2022/07/19/heres-how-labor-dynamism-aff...
https://www.americanbar.org/groups/crsj/resources/human-righ...
https://www.cbo.gov/publication/59510 shows this. Bottom 20% wages, after accounting for benefits and taxes, have significantly increased. If you want to answer the question "are the bottom 20% materially better off now than in the 1960s?", this is your answer. Hourly wages without accounting for benefits miss a crucial element, so they're not really indicative of reality.
Caveat: this shows the bottom quintile (20th percentile) and after looking at the data it appears to be a change of ~60% of real disposable income from 1978 to 2020. 10th percentile would be similar.
TL;DR: if you use real disposable income, which accounts for taxes and benefits (what really matters), wages have not stagnated for anyone but have increased a lot - by almost 60%.
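For perspective, a ~60% rise over 1978-2020 works out to a fairly modest annual rate once compounded. This is just compounding arithmetic on the figure quoted above:

```python
# Annualized growth implied by ~60% real disposable income growth
# over 1978-2020 (42 years): (1.60)^(1/42) - 1.
total_growth = 1.60
years = 2020 - 1978

cagr = total_growth ** (1 / years) - 1
print(round(cagr * 100, 2))  # ~1.13% per year
```

Whether roughly 1.1% per year over four decades counts as "increased a lot" or "stagnated" is, of course, exactly the disagreement in this subthread.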
It’s very telling that instead of refuting my point you instead choose to derail the discussion into a personal attack. Were you discussing in good faith, you would try to understand what I said and reply to it.
It’s not universal at all that people are less prosperous now.
Why don’t you do good faith research and try to answer whether the bottom earners are actually better off now than before? You will come to the same conclusion.
I literally acknowledged that prosperity has increased for people in other parts of the world.
Why don't you rewrite my comment so that it's acceptable to you and then we'll discuss that?
If unaddressed - ie by dismissal - it doesn't go away. It simply festers. It will fester until it ruptures. Ignoring it or minimizing it doesn't make it go away.
I do acknowledge that inequality can have unforeseen consequences and is worth talking about and tackling today, but only by considering the right tradeoffs.
> Increasing taxes too much and there are no incentives to work and prosperity reduces.
> Your first mistake is thinking hard work matters.
If hard work doesn't matter, then why care what incentives are?
The world still values valuable work, and that’s what we have to incentivize. Valuable means making things people need, not personally working 20 hours a day.
What the Wikipedia search won't tell you is that the methodologies and poverty guidelines used in making some of these claims are rather questionable. While real progress has undeniably been made, the extent is greatly exaggerated:
https://www.project-syndicate.org/commentary/indian-governme...
The former stems from the coordination problem of extracting wealth, but not so fast that it solves the coordination problem for the labor class, who, like you said, have strikes first and revolt second as their weapons of last resort.
The ownership class can voluntarily reduce wealth inequality, and they have before, but as time marches on, memories fade of what happens when they don't, pushing them closer and closer to options they don't want to admit work.
High value product work remains safe from AI automation for now, but it was also safe from offshoring so long as domestic capacity existed.
It may just be incompetence in large organisations though. Things get outsourced because nobody wants to manage them.
That would explain a lot, actually. If so, it'll be interesting to see what happens to the overall software economy when that revenue stream dries up. My wife grew up in Mexico on a border town and told me that the nightclubs in her town were amazing; when she moved to the US, she was disappointed by how drab the nightclubs here were. Later she found out that the border town nightclubs were so extravagant because they were laundering drug money. When they cracked down on the money laundering, the nightclubs reverted back to their natural "drab" state of relying on actual customers to pay the bills.
I'm sorry Dave. I can't answer that.
What this person has started is being done by WITCH companies at the largest scale and in the most fraudulent way possible.
It also improves brand reputation by actually paying attention to what customers are saying and responding in a timely manner, with expert-level knowledge, unlike typical customer service reps.
I've used LLMs to help me fix Windows issues using pretty advanced methods, where MS employees would have just told me to either re-install Windows or send them the laptop and pay hundreds of dollars.
But I can’t imagine ever calling tech support for help unless it is more than troubleshooting and I need them to actually do something in their system or it’s a hardware problem where I need a replacement.
Over the past 3 years of calling support for any service or infrastructure (bank, health insurance, doctor, whatever), over 90% of my requests were things only solvable via customer support or escalation.
I only keep track because I document when I didn't need support into a list of "phone hacks" (like press this sequence of buttons when calling this provider).
Most recently, I went to an urgent care facility a few weekends ago, and they keep submitting claims to the arm of my insurance that is based in a different state instead of the one in my proper state.
I want:
> Respond with terminal command to do X
>> `complex terminal command code block`
> oh we need to run that on all such and such files
>> script.py
Yes, that's literally how you learn things. I can't understand how anyone on this forum thinks otherwise. Hackers are supposed to be people who thrive in unknown contexts, who thirst for knowledge of how things work. What you are suggesting is brain atrophy. It's the death of knowledge for profit and productivity. Fuck all of that.
They might be boring but they are nonetheless foundational
In that case I'm really not sure why professional software developers should be interested in your opinion on the technology at all
That is because search is still mostly stuck in ~2003. But now ask the exact same thing of an LLM and it will generally be able to provide useful links. There's just so much information out there, but search engines just suck because they lack any sort of meaningful natural language parsing. LLMs provide that.
(Might be a naïve question, I'm at the edge of my understanding)
LLMs inherently would introduce the possibility of hallucinations, but just using the vectors to match documents wouldn't, right?
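Right: ranking documents by vector similarity produces no generated text, so there is nothing to hallucinate; the worst failure mode is returning an irrelevant document. A minimal sketch of the idea, where bag-of-words count vectors are a toy stand-in for learned embeddings and the documents are made up:

```python
import math
from collections import Counter

def vectorize(text):
    # Toy stand-in for a learned embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, docs):
    # Rank existing documents by similarity to the query.
    # Nothing is generated, so nothing can be invented.
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)

docs = [
    "how to reset your password",
    "billing and refund policy",
    "troubleshooting network connection errors",
]
print(search("I forgot my password", docs)[0])
```

A real system would swap the count vectors for embedding-model vectors and the linear scan for an approximate-nearest-neighbor index, but the hallucination-free property comes from the same place: the output is always a verbatim existing document.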
Yes, this is how all the new dev documentation sites work nowadays, with their much-improved searches. :-D
Why invest in making users more savvy when you can dumb everything down to a five-year-old's level, eh?
if the service offered is "support" then why is a phone call less acceptable than reading documentation?
I love a good AI to help search through large documentation basis for the particular issue I'm experiencing. But it is clear when the problem I am having is outside of the AI's sometimes infantile ability to understand, and I need the ability to bypass it.
Cancel account- have them call someone.
Withdraw too much - make it a phone call.
Change their last name? - that would overwhelm our software, let’s have our operator do that after they call in.
Etc.
That doesn't make much sense. Either your system can handle it or it can't. Putting a support agent in front isn't going to change that.
The business undoubtedly did a crude cost/benefit analysis where the cost to expose and maintain that public interface vastly outstrips the cost for the few people that have to call in and change their name.
It’s not exactly a difficult design problem. Unless I’m missing some thing.
Haha, not likely.
In reality the org is so drowned in technical debt that changing the last name involves manually running 3 different scripts that hit three different DBs directly and the estimate from the 3rd party dev consultancy that maintains the mess for how long it'd take to make a safe publicly usable endpoint is somewhere between 2 years and forever.
Perhaps there is a group that isn’t served by legacy UI discovery methods and it’s great for them, but 100% of the chatbots I’ve interacted with have damaged brand reputation for me.
The trouble is when they gatekeep you from saying "I know what I'm doing, let me talk to someone"
All my interactions with any AI support so far is repeatedly saying "call human" until it calls human
Customer support is when all the documentation already failed and you need a human.
At $dayjob our customers are nontechnical so they don't always know what to search for, so the LLM/RAG approach can be quite handy.
It answers about 2/3 of incoming questions, can escalate to the humans as needed, and scales great.
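That answer-or-escalate pattern can be sketched as follows; the knowledge base, the word-overlap scoring, and the 0.4 threshold are all hypothetical stand-ins for a real retrieval pipeline and tuned cutoff:

```python
def overlap(question, doc):
    # Crude relevance score: fraction of question words found in the doc.
    qw, dw = set(question.lower().split()), set(doc.lower().split())
    return len(qw & dw) / len(qw) if qw else 0.0

def answer_or_escalate(question, kb, threshold=0.4):
    # kb maps a doc snippet to a canned answer (illustrative structure).
    # Below the confidence threshold, hand off to a human rather than guess.
    best = max(kb, key=lambda d: overlap(question, d))
    if overlap(question, best) >= threshold:
        return ("bot", kb[best])
    return ("human", "Escalating to a support agent.")

kb = {
    "reset forgotten password": "Use the 'Forgot password' link on the login page.",
    "update billing address": "Go to Settings > Billing and edit your address.",
}
print(answer_or_escalate("how do I reset my forgotten password", kb))
print(answer_or_escalate("my order arrived damaged", kb))
```

The key design choice is the explicit escalation path: the bot handles the common, well-covered questions and the threshold routes everything else to people, which is what keeps the roughly one-third of questions it can't answer from becoming dead ends.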
I.e. AI isn't allowed to offer me a refund because my order never arrived. For that, I have to spend 20 minutes on the phone with Mike from India.
99% seems like a pulled-out-of-your-butt number and hyperbolic, but, yes, there's clearly a non-trivial percentage of customer support that's absolutely terrible.
Please keep in mind, though, that a lot of customer support by monopolies is intended to be terrible.
AI seems like a dream for some of these companies to offer even worse customer service, though.
Where customer support is actually important or it's a competitive market, you tend to have relatively decent customer support - for example, my bank's support is far from perfect, but it's leaps and bounds better than AT&T or Comcast.
I don't agree. AI support is as useless as real customer support. But it is more polite, calm, with clear voice, etc. Much better, isn't it?
AI is not better than a good customer service team, or even an above-average one. It is better than a broken customer service team, however. As others have noted, 99% is hyperbolic BS.
i have yet to experience this. unfortunately i fear it's the best i can hope for, and i worry for those in support positions.
I immediately hop on customer service chat to ask for a refund. I was surprised to be talking to an LLM rather than a human, but go ahead and explain what happened and state I want the transaction for the subscription canceled. It offers to cancel the subscription at the end of the 30-day subscription. I decline, noting I want a refund for the subscription I didn't intend to take. It repeats it can cancel the subscription at the end of 30-day subscription. I ask for human. It repeats. I ask for human again. It repeats. I disconnect.
Amazon knows what it's doing.
If Amazon wanted to give you the ability to get a refund for unused Prime benefits, it would allow the AI to do it, or even give you a button to do it yourself.
They don't trust the LLM, so they cripple what it can do; that would be the generous interpretation. I actually think they're intentionally crippling the LLMs' access to accounts to reduce their spend, not on CSRs, but on CSR actions such as refunds, where the LLM becomes an excuse for the change; they can hide behind what they'll call technical issues or teething pains.
If it were the former, they would help you when you escalated it. So I think they are just becoming more greedy.
At my job, thanks to AI, we managed to rewrite one of our boxed vendor tools we were dissatisfied with, to an in-house solution.
I'm sure the company we were ordering from misses the revenue. The SaaS industry is full of products whose value proposition is 'it's cheaper to buy the product from us than hire a guy who handles it in house'
There are projects I lead now that would have needed at least one or two junior devs to do the grunt work after I had very carefully specified the requirements (which I would have to do anyway) and diagrams, and now ChatGPT can do that work for me.
That’s never been the case before and I’ve personally gone from programming in assembly, to C, to higher level languages and on the hardware side, personally managing the build out of a data center that had an entire room dedicated to a SAN with a whopping 3TB of storage to being able to do the same with a yaml/HCL file.
I remember Bill Gates once said (sometime in the 2000s) that his biggest gripe is that during his decades in the software industry, despite dramatic improvements in computing power and software tools, there has been only a modest increase in productivity.
I started out programming in C for DOS, and once you got used to how things were done, you were just as productive.
The stuff frameworks and other tooling help with is at most 50% of the job, which means, per Amdahl's law, productivity can at most double.
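That bound is easy to check numerically; this is just the standard Amdahl's law formula applied to the 50% figure above, nothing specific to frameworks:

```python
def amdahl_speedup(fraction_sped_up, factor):
    # Overall speedup when only `fraction_sped_up` of the work
    # is accelerated by `factor` (Amdahl's law).
    return 1 / ((1 - fraction_sped_up) + fraction_sped_up / factor)

# Even an infinitely fast tool for 50% of the job caps overall
# productivity just under 2x; a merely 2x tool gives ~1.33x.
print(amdahl_speedup(0.5, 1e9))  # approaches 2.0
print(amdahl_speedup(0.5, 2))    # about 1.33
```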
In fact, I'd argue productivity actually got reduced (comparing my output now, vs back then). I blame this on 2 factors:
- Distractions: it's so easy to d*ck around on the internet instead of doing what you need to do. I have a ton of my old SVN/CVS repos, and the amount of progress I made was quite respectable, even though I recall being quite lazy.
- Tooling actually got worse in many ways. I used to write programs that ran on the PC, you could debug those with breakpoints, look into the logs as txt, deployment consisted of zipping up the exe/uploading the firmware to the uC. Nowadays, you work with CI/CD, cloud, all sorts of infra stuff, debugging consists of logging and reading logs etc. I'm sure I'm not really more productive.
It worked out pretty well. Who knows how the software engineering landscape will change in 10 to 20 years?
I enjoyed Andrej Karpathy's talk about software in the era of AI.
You might well see more software profits if costs go down, but less revenue. Depends on Jevons paradox, really.
Like, you have the option of either using AWS RDS, or hiring a DBA and devops who administer your DB, and set up backups, replication and networking.
If AI (or a regular dev with the help of AI) can do that, it might mean your company decides to take the administrative burden on, and save the money.
A is producing something of value 100. That is complex to configure so B comes along and they say: Buy from me at 150 and you will get both the product and the configuration.
C comes and say: there are multiple products like this so I created a marketplace where I do some offering that in the end will cost you 160 but you can switch providers whenever you want.
Now I am a customer of C and I buy at 160:
- C gets 160, retains 10; total revenue 160.
- B gets 150, retains 50; total revenue 150.
- A gets the 100.
Here is the question: How big is GDP in this case?
I think it is 160.
Now A adds an LLM, for about 4 extra, that can (allegedly) do what B and C did, removing the intermediaries, and so now the GDP is 104.
Am I wrong with this?
The real GDP after accounting for cost of living has not changed much, because while nominal GDP has decreased, the cost of living has also decreased (the product is now priced at 104 instead of 160).
But it’s even better because we have this extra money that we previously spent on C. In theory we will spend this extra money somewhere else and drive demand there. The workers put out of employment due to LLM will move to that sector to fulfill it.
Now the GDP not only increased but also cost of living reduced.
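The accounting the thread is doing here is GDP by the value-added method, which can be written out explicitly; the numbers below are just the hypothetical ones from the example above:

```python
def gdp(value_added):
    # GDP by the value-added method: sum each firm's revenue minus
    # the cost of intermediate inputs it buys from other firms.
    return sum(value_added)

# Before: A produces 100 of value, B's configuration adds 50,
# C's marketplace adds 10. GDP equals the final price paid to C.
before = gdp([100, 50, 10])

# After: A's LLM (hypothetically priced at 4) replaces B and C.
after = gdp([100, 4])
print(before, after)  # 160 104
```

This is why the final price (160) and the sum of value added agree: every intermediate purchase is netted out, so only the value each firm adds counts once.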
One example I mentioned is SaaS whose value proposition is that it's cheaper than hiring a dedicated guy to do it. If AI can do it, then that software has no more reason to exist.
Whether or not they end up losing business long term, it seems like a nice grift for as long as they can pull it off.
1) The quality of work produced being sub-par, with many instances of expensive, failed projects, leading to predictions of the death of offshoring.
2) Unwillingness of offshore teams to clarify or push back on requirements.
3) Local job displacement.
What people figured out soon enough was that offshoring was not as easy as "throwing some high-level requirements over the wall and getting back a fully functional project." Instead the industry realized that there needed to be technically competent, business-domain-savvy counterparts in the client company who would work closely with the offshore team, setting concrete and well-scoped milestones, establishing best practices, continuously monitoring progress, providing guidance, removing blockers, and encouraging pushback on requirements, even revisiting them if needed.
Offshore teams, for their part, became culturally more comfortable with questioning requirements and engaging in two-way discussions. Eventually offshore companies built up enough business domain knowledge that client companies could outsource higher- and higher-level work.
All successful outsourcing projects followed this model, and it spread quickly across the industry, which was why the predictions of the death of offshoring never materialized. In fact the practice has only continued to grow.
It's very interesting how much the same strategies apply to working with AI. A lot of the "how to code effectively with AI" articles basically offer the exact same advice.
On the job displacement side, however, the story may be very different.
With outsourcing, job displacement didn't turn out to be much of a concern because a) by delegating lower-level grunt work to offshore teams, local employees were then freed up to do higher-level, more innovative work; and b) until software has "eaten the whole world" the amount of new work is essentially unbounded.
With AI though, the job displacement could be much more real and long-lasting. The pace at which AI has improved is mind-boggling. Now the technically-competent, business-domain savvy expert could potentially get all the outsourced work done by themselves through an army of agents with very little human support, either local or offshore. Until the rest of the workforce can upskill themselves to the level of "technically-competent, business domain-savvy expert" their job is at risk.
"How many such roles does the world need?", and "How can junior employees get to that level without on-the-job experience?", are very open questions.
Original title "AI is already displacing these jobs" tweaked using context from first paragraph to be less clickbaity.