What he hopes for is to just reduce the number of people they employ. So the "more people doing other types of jobs" just makes the message more palatable.
Suppose all companies follow suit: who is going to buy their crap?
There's no way that paying for a bunch of employees that you don't need, just so you can have some customers, is going to make sense. Even if you're operating a company town, only a fraction of their income is going to be spent on your company's goods/services, so you'll never be able to recoup the wage that way.
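Back-of-the-envelope, with made-up (and generous) numbers:

    wage = 60_000               # annual wage you pay the employee
    share_spent_on_you = 0.10   # fraction of their income spent on your goods/services
    gross_margin = 0.30         # profit you keep on each of those dollars

    recouped = wage * share_spent_on_you * gross_margin
    print(recouped)             # 1800.0 -> you claw back ~3% of the wage

Even with generous assumptions you recover pennies on the dollar, so "employ people so they can buy from us" never closes the loop for a single firm.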
It seems obvious that many white collar workers today will have to do something involving physical labor at some point in the future.
I expect I will be facing this in my mid 50s. Really not ideal timing.
It's all in Adam Smith and economic history.
Amazon is also way behind tech peers on AI. These sorts of puff PR pieces don’t do much to shake that reality.
https://www.reuters.com/business/retail-consumer/amazon-cons...
A fraction of that is the Anthropic investment.
This take is getting old, and the story won't stick with folks till their desk is in a cardboard box...
Out of all the faangs, amz is the best positioned to remove staff and agentify the work they were doing. First, amz constantly churns the lower x%. They've been doing this for years now. They know what to count and who to fire. Second, amz has had everyone write a story about everything they do, day in, day out for years now. Change a lightbulb? Not without a story. Guess what you need for training LLMs? Yup, stories.
There are plenty of people writing stories and coordinating the writing of other stories. Those people will be the first out. It's never the top nor the bottom.
Which tech peers is Amazon way behind on AI? Neither MSFT nor AAPL have their own models. FB has no path to model monetization. GOOG is unique, but that's it, and Amazon might be able to better capitalize on its AWS enterprise customers. Amazon was way behind, yes, but at this point they are positioned well enough to execute.
Google you’ve already covered, and Apple despite its faults has been designing and producing AI-targeted hardware for a decade and has a much clearer story for integrating AI into its lineup.
AWS has a scattered mess of Q-branded services and a consistent track record of shipping garbage enterprise apps like Workmail, Chime, Workdocs, Cognito, and arguably Quicksight. Bedrock APIs are frequently behind their parent vendors' feature sets, and Bedrock as a whole isn't better than the thousands of LLM management platforms that have already sprung up.
I’ll never fully bet against Amazon as the far and away cloud market leader, but their existing AI position is flimsy and their increasingly hostile position towards their workforce reeks of desperation.
But would they ever admit such a failure in front of shareholders who are still under the spell of "AI agents", "AGI", and "ASI" bullshit?
I don't think so.
Discussed here: https://news.ycombinator.com/item?id=44289554
So glad I left that place.
CEOs can warn about AI replacing jobs until they're blue in the face, but people won't listen.
And when mass job losses finally arrive, people (including the CEOs) will be shocked and overwhelmed.
In fact, that is probably the reason that people unfortunately have learned not to listen. There's even a fable about it.
Although I do think that AI fits the pattern of "real big thing".
In general, cultural diffusion progresses in three stages: from insiders to money people to the public.
For example, great artists are recognized first by fellow artists and critics, then by art auctions, then by the broader public.
AI seems to be following a similar trajectory. AGI is felt first by insiders (AI researchers), then by money people (politicians and business leaders - we are here) then by the public (I'm guessing soon).
Your economic system is a joke
If AI truly arrives under the current capitalist system, there is no endgame. Ouroboros.
> Think of agents as software systems that use AI to perform tasks on behalf of users or other systems. Agents let you tell them what you want (often in natural language), and do things like scour the web (and various data sources) and summarize results, engage in deep research, write code, find anomalies, highlight interesting insights, translate language and code into other variants, and automate a lot of tasks that consume our time. There will be billions of these agents, across every company and in every imaginable field. There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast.
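For concreteness, an "agent" in this sense is basically a loop around an LLM that can call tools. A minimal sketch (the tool names and the llm callable are hypothetical, not any vendor's actual API):

    def run_agent(llm, tools, goal, max_steps=10):
        """llm: callable that returns either a tool call or a final answer."""
        history = [{"role": "user", "content": goal}]
        for _ in range(max_steps):                 # hard cap so it can't loop forever
            action = llm(history, list(tools))     # model picks a tool + args, or finishes
            if action["type"] == "final":
                return action["answer"]
            result = tools[action["tool"]](**action["args"])   # e.g. web_search, summarize
            history.append({"role": "tool", "content": str(result)})
        return None                                # gave up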
This is the same wishful thinking that AI companies are heavily marketing.
Nobody will want to use an "agent" that makes mistakes 60% of the time. Until the industry figures out a way to fix the problems that have plagued this technology since the beginning―which won't be solved by more compute, better data, or engineering hacks―this agentic future they've been promising is a pipe dream.
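And the failure math compounds. A rough illustration, assuming each step of a multi-step task succeeds independently (both numbers are made up):

    per_step_success = 0.95            # optimistic per-step reliability
    steps = 20                         # a modest multi-step "agentic" task
    task_success = per_step_success ** steps
    print(round(task_success, 2))      # ~0.36 -> the end-to-end task fails roughly 2 times in 3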
If you are working for a company that employs at least 1000 full-time engineers, I think you should consider joining a team where every project involves AI in some way, if you aren't already on one. Whether it's owning AI tooling, or developing client features that use AI directly, or even just prototyping AI concepts that never launch. The safest roles, like research and working directly on the models, are out of reach for most people due to competition and position scarcity, but that's ok. There are so many positions downstream from those. The key thing to look for is a position where your AI features can actually turn a profit, which might be rare, but not as difficult to get as an upstream role. But it's still fine to be in a role that isn't profitable.
I think AI-adjacent roles will be either the first or the last full-time SWE jobs to go during the next tech downturn, which I don't think we are in yet. I am betting on the latter, because I think corporations will continue to reroute more and more funding towards AI all the way down. Even if the current AI cycle ends up as a failure, we are already in the sunk-cost stage of commitment. There is no turning back short of a total collapse.
Just a basic sniff test, though: if AI enables developer productivity, that would translate to more revenue, reduced costs, reduced risk, etc., and the bottom-line numbers would get better. With more resources available, your next move is to decrease spending on further productivity enhancements or revenue opportunities? They don't want more revenue? Doesn't add up.
The better headline would be: "Amazon CEO Andy Jassy, faced with poor financial outlook, tries to convince the public that downsizing is due to improvements in AI"
For example, now I have a ton of graphs and interactive UI pages that interact with my code. They've made everyone's lives easier, but at least in my case it was not a dealbreaker not having them, and frankly nobody was willing to pay for them.
You can legitimately argue "far less to do with", but it's definitely not nothing. There are countless projects underway where AI will allow for 10% reductions with zero business impact in the short term, and 25-40% reductions (sometimes more) by 2030.
But these kinds of projections aren't unusual at all — if you use the Deep Research capabilities of modern models to build a list of public projections for your own research, you'll see similar estimates. These reports will generally use the framing of "efficiency gains", where AI will "free-up employees from drudgery to focus on higher-value work", but my intuition is that a future where all individual contributors are elevated to Director of Agentic Workflows is probably not the most likely outcome.
The model by MIT's Daron Acemoglu estimates that ~5% of U.S. tasks can be completely and profitably automated by AI within ten years.
It was expressly not a head-count forecast, and didn't attempt to quantify the headcount reduction that AI augmentation could enable.
Is this the MIT paper? In that one the TFP gain is 0.55%.
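A number like that falls out of Hulten-style arithmetic, where the aggregate productivity gain is roughly the share of tasks affected times the average cost saving on those tasks. With illustrative inputs (not the paper's exact figures):

    exposed_task_share = 0.05    # ~5% of tasks profitably automatable within a decade
    avg_cost_saving = 0.11       # assumed average cost reduction on those tasks
    tfp_gain = exposed_task_share * avg_cost_saving
    print(f"{tfp_gain:.2%}")     # 0.55% -> small task share x modest savings = small macro effect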
I understand all the theory, but it can largely be condensed into: AI makes the workforce more efficient, so you need fewer people. But there are no good studies AFAIK that measure AI-powered efficiency, and certainly nothing about how to model workforce reduction due to AI. I am curious what the science is behind these opinions.
The only logical explanation is that they don't have enough opportunities to utilize those people OR as I previously mentioned... their financials might look bad, and they are trying to make them look better so they don't take a hit in the markets.
Stonks go down - fast - when all those fired people stop buying, but that's a problem for the next CEO.
As you say, they could also expand. Or just fix the problems with the site.
But they don't have the imagination to do that.
Companies don't exist to benefit their employees (or their customers).
If that was true then the companies should never have been doing layoffs, as all these companies are generating tens of billions of dollars in revenue.
> The better headline would be: "Amazon CEO Andy Jassy, faced with poor financial outlook, tries to convince the public that downsizing is due to improvements in AI"
This is assuming that companies have the capacity to keep increasing revenue by adding more workforce, which is just not true. At some point you hit diminishing returns with more workers. The same goes for agent workers. To chase more revenue you need a lot more than just more SWEs, and a lot of that is not currently similarly scalable.
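A toy version of the diminishing-returns point, assuming output scales concavely with headcount (the exponent is made up):

    def revenue(workers, scale=1_000_000, alpha=0.7):
        return scale * workers ** alpha       # concave: each extra worker adds less than the last

    for n in (100, 200, 400):
        marginal = revenue(n + 1) - revenue(n)
        print(n, round(marginal))             # marginal revenue per added worker keeps shrinking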
So it really doesn't matter what's realistic. They want cheaper workers to live in fear.
There are only two reasons I can think not to. First, if AI can fully replace a human in a role. But it seems like we're a long way away from that. Second, if the added productivity leaves you with nothing to do. But we're in tech. There's always something new to do. If you're not doing new things as a company, you're getting replaced by those who are.
So it seems like a losing strategy to make your workforce cost reduction your primary concern when we could see the greatest workforce productivity gain in modern times.
More companies with smaller workforces would be better than fewer companies with larger workforces.
The headcount growth during COVID, along with the return of offshoring via GCCs, was driven by the intention to speed up delivery of products and initiatives.
There are some IR games being played, but the productivity gains are real: where you might previously have recruited new grads or non-traditional candidates, you can now reduce hiring significantly and overindex on hiring, and better compensating, more experienced new hires.
Roles, expectations, and responsibilities are also increasingly getting merged - PMs are expected to have the capabilities of junior PMMs, SEs, UX Designers, and Program Managers; and EMs and Principal Engineers are increasingly expected to have the capabilities of junior UX Designers, Program Managers, and PMs. This was already happening before ChatGPT (eg. The Amazon PM paradigm) but it's getting turbocharged now that just about every company has licenses for Cursor, Copilot, Glean Enterprise, and other similar tools.
yeah, that's a useful thing that a chatbot could do...in theory.
in practice, from the recent CMU study [0] of how actual LLMs perform on real-world tasks like this:
> For example, during the execution of one task, the agent cannot find the right person to ask questions on RocketChat. As a result, it then decides to create a shortcut solution by renaming another user to the name of the intended user.
0: https://arxiv.org/pdf/2412.14161 (pdf)
This article/comment isn't really the prompt, just a reminder that it seems like a shtty place to put my funds, and I'll soon be using AI to replace it anyway!
If AI is so useful that it can fully replace engineers or other humans, why aren’t products next level amazing?
If the barrier to entry for these high margin tech companies becomes so low that they no longer even need employees, isn’t the next step to compete on quality?
AI won’t fundamentally alter either of these facts.
People are out there building useful stuff with AI but they don't work at Amazon
It makes more sense that Amazon would continue to push AI where it's already being used successfully. Devs may benefit from finding solutions quicker with AI, but it's never made sense to me why that would affect productivity per head or change hiring/firing rates.
Put another way: there are never enough devs and they write a lot of shitty code. AI writes even shittier code, but in subtly different ways and can write it even faster helping the dev iterate to better code.
The result is basically no change anywhere except a modest increase in quality. This is equivalent to, but cheaper than going on an epic quest to find the good devs and overpay them. Why is this a bad thing for like 99% of people who write code? There's basically no impact on their pay or ease of finding a job.
I believe the business leaders are seriously considering this, i.e. not necessarily just as an excuse to RIF; they probably believe in it. Whether it will be successful is irrelevant.
I'm eagerly waiting for someone to talk about AI integration experiments within FAANG. I'm surprised no one has talked about it yet; maybe there is some kind of NDA, or the experiments are still in early stages. Once the experiments prove even marginally successful, I bet the leaders are going to start some mass layoffs. Or maybe worse: if they are pressured by stock prices, they'll do that and see what happens before anything conclusive comes out.
To any team that is integrating AI into your company's data or docs: please STOP and don't do that. I'm not talking about USING AI, but about INTEGRATING AI.
More money is spent at most of these companies coordinating work than actually doing work.
But the comments saying Claude can't replace some genius are irrelevant. The number of SWEs at big tech is so high that the law of averages dictates most people are not rockstars (and this is validated by my observations). Most SWEs just write internal-RPC-to-internal-RPC wrappers. I am seeing that everyone is relying a lot on these tools, and the new SWEs seem to utterly depend on them. HN users will always have some edge case pointed out, but most software is low-scale CRUD apps (even at big tech, most internal tools are low scale), and these tools are definitely doing better than the median SWE I have encountered.
It's a pretty bland memo and a thinly veiled advertisement for AWS.