I'm radically pro-immigrant. I want the smartest people from around the world to come work here. I want to unshackle them from their corporate sponsors. The current system is unfair to immigrants (who are bound like serfs to their workplace) and to citizens (who lose jobs because corporations prefer serfs).
But, you know, a company that has invested billions in AI selling the idea that AI will replace labour is not surprising.
AI has provided a lot of unique value, but despite the countless headlines stoking fear of mass job loss, there remains little substance to the claims that it can automate anything but the most menial of jobs. Until we can point the finger directly at AI as the cause of rising job losses, rather than at other unrelated economic factors, this all just smells of fear-mongering with a profit incentive.
These people universally hate labor.
The entire tech industry went on a firing binge when Musk bought Twitter and fired everyone. And while the Nazi salutes have done a bit to blunt his golden-boy status in the exec ranks, not THAT much...
Now every CEO is trying to elbow their way in to be the AI golden boy. It's worth tens of billions, as Musk has shown.
1. The existing codebase gets worse
2. The existing employees work more
3. The salaries stay flat
I’d argue that 1 is irrelevant provided the system continues to extract profit at the same or greater margin
Amazon lives and dies by not caring about #2 so that’s constant
#3 is desirable from Jassy's and the board's perspective
Seems like exactly what I’d expect from Amazon
not at all! no assumptions made: their website, technology stack, and SaaS platform are all garbage... yet they persist in making obscene amounts of money!
AI is for coding velocity like electricity is for better room lighting.
We haven't seen the nature of work after AI yet; we're still in a nascent phase. Consider every single white-collar role, process, and workflow in your organization up for extreme disruption during this transition period, and it will take at least a decade to even begin to sort out.
Maybe startup development will significantly accelerate with AI churning out all the boilerplate to get your app started.
But enterprise development, where the app is already there and you’re building new features on top of a labyrinthine foundation, is a different beast. The hard part is sitting through planning meetings or untangling weird system dependencies, not churning out net-new code. My two cents anyway.
Though AI will probably just proactively add features and open PRs and people can choose
Which I expect will be the gist of management consulting reports for the next decade.
If human decision-makers become the bottleneck... eventually that will be reengineered.
I'm fascinated to imagine what change control will need to look like in a majority-AI scenario. Expect there will be a lot more focus on TDD.
I don’t think LLMs are particularly smart, that they will definitely replace humans at anything, or that they’ll lead to better work. But I can already tell that their inherent lack of an ego DOES accelerate things at enterprises, for the simple reason that the self-imposed roadblocks above stop happening
We are also explicitly NOT allowed to make any code changes that aren’t part of a story that our product owner has approved and prioritized.
The result is that we scrape together some stories to work on every sprint, but if we finish them early, we quickly run into red tape and circular conversations with other “decision makers” who need to tell us what we’re allowed to do before we actually do anything.
It’s fairly maddening. The whole org is hamstrung by a few workaholic individuals who control decision making for several teams and are chronically unavailable as a result.
I’ve seen this sort of thing happen at other big enterprises too but my current situation is perhaps an extreme example of dysfunction. Point being, when an org gets tangled up like this, LLMs aren’t gonna save it :)
I’ve already witnessed a certain big tech company that started to move much faster by removing TPMs and EMs across the board, even without LLMs to “replace” them. With LLMs, you need even fewer layers. Then eventually fewer middle-of-business decision makers. In your example, it’s entirely possible that the function of making those components could be entirely subsumed by a single AI bot. That’s starting to happen a lot in the devops space already.
All that said, I doubt your business would benefit from moving faster anyway - most businesses don’t actually need to move faster. I highly recommend the book “Bullshit Jobs” on this matter. Businesses will just need fewer and fewer people.
The upside is that both of these things are the kind of tasks that are probably good to give to AI. I've always got little UI bugs that bother me every time I use our application but don't actually break anything and thus won't impact revenue and never get done.
I had a frontend engineer, who, when I could just find a way to give him time to do whatever he wanted, would just constantly make little improvements that would incrementally speed up pageload.
Both of those cases feel like places where AI probably gets the job done.
So, to clarify – developers want to make improvements to the codebase, and you want to give that work to AI? Have you never been in the shoes of making an improvement or a suggestion for a project that you want to work on, seeing it given to somebody else, and then being assigned just more slog that you don't want to do?
I mean, I'm no PM, but that certainly seems like a way to kill team morale, if nothing else.
> I had a frontend engineer, who, when I could just find a way to give him time to do whatever he wanted, would just constantly make little improvements that would incrementally speed up pageload.
Blows my mind to think that those are the things you want to give to AI. I'd quit.
The ability to untangle old bad code and make bigger broader plans for a codebase is precisely where you need human developers the most.
Everybody's job is to serve the company priorities. Engineers don't get to pick the tasks they want to do because they're getting paid to be there. I also have spent lots of time doing things I'd rather not do, because that's the nature of a job (plus a pile of stock options incentivizes me).
Better to have those tasks done by AI than not at all.
I wasn't clear, but getting that AI to admit that it has made an error, and getting it to actually correct the error, is like trying to fit a square peg into a round hole. It will take the blame and continue as if nothing needs to change, no matter what prompts you send it.
Amazon has a document-writing culture, and all of those documents will be written by AI. People have built careers on writing documents. Same with operations: it's all about audit logs. Internally, there are MCPs that have already automated TPM/PM/on-call/maintenance coding work. Some orgs in AWS are 90% foreign workers, and there is fear about losing visa status and being sent back; the automation is just beginning. Sonnet 4 felt like the first time MCPs could actually be used to automate work.
A region-expansion scoping project in AWS that required detailed design and inspection of tens of codebases was done in a day; it would usually require two or three weeks of design work.
The automation is real: the higher-ups are directly monitoring token usage in their orgs, and pushing senior engineers to increase Q/token usage metrics among lower-level engineers. Most orgs have a no-backfill policy for engineers leaving and are supplementing staffing needs with Indian contractors, the expectation being that fewer engineers will be needed in a year's time.
pryelluw•7mo ago
Isn’t this their general approach since forever?
nitwit005•7mo ago
Somehow they want to act like they are making a shift, rather than say they were ahead of the trend.
chneu•7mo ago
The wording changes, the intention doesn't.
If they could pay you nothing they would.
ethbr1•7mo ago
But I expect the increasing income stratification of the 2010s onward is a harbinger that we're running out of high-paying jobs for the number of people who are qualified for them.
And the window is closing for countries to agree to something like a structural tax on AI with benefits going to society to address the ills.
Absent that: further stratification, more employee-less businesses, and not a great future.
rsynnott•7mo ago
Expect this to repeat until the markets choose a new favourite thing (I'm betting on "quantum"; it's getting a lot of press lately and is nicely vague.)
827a•7mo ago
Really bad look and poor leadership from Jassy. There's a good way to frame adoption of AI, but this is not it.
usefulcat•7mo ago
For 6/17, the S&P 500 was down 0.84%, QQQ (Nasdaq stocks) was down 0.98% and AMZN was down 0.59%.
AMZN slightly outperformed the market today.
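For what it's worth, the relative-move arithmetic behind that claim is simple (the percentages are the ones quoted above; nothing else is assumed):

```python
# Daily moves quoted above for 6/17, as percent changes
moves = {"S&P 500": -0.84, "QQQ": -0.98, "AMZN": -0.59}

# AMZN's move relative to each benchmark: positive means it fell less
# than the benchmark, i.e. outperformed on the day
relative = {
    name: round(moves["AMZN"] - pct, 2)
    for name, pct in moves.items()
    if name != "AMZN"
}
print(relative)  # AMZN fell 0.25 pts less than the S&P 500, 0.39 less than QQQ
```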
usefulcat•7mo ago
In any case, my point was that objectively, AMZN suffered less today than many other stocks, including many other large cap tech (QQQ) and non-tech (S&P) stocks. Considering those facts, it seems like a stretch to claim "the market clearly thought it was strange as well".
827a•7mo ago
One cannot draw any conclusions about how an individual stock in the S&P 10 performs relative to the overall market, because of how correlated these companies are and how much their combined weight contributes to the overall market. Every company in the S&P 10 is a tech company, except Berkshire. They trade together, and how they trade impacts the entire S&P 500.
When Jassy says something, it impacts Google's stock. When it comes out that OpenAI might have to sue Microsoft, it impacts Amazon's stock. Why this happens only makes sense to Wall Street's HFT systems which, quite honestly, are likely closer at this point to ASI than OpenAI; albeit totally unintelligible in their motives and reasoning.
Amazon did not outperform the market. The market is Amazon. The S&P 10 is not 10 individual companies; it's one company.
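A toy sketch of the weight argument (the 35% combined weight and the uniform 1% move are hypothetical numbers, chosen only to illustrate how a cap-weighted index works):

```python
# Hypothetical: the top 10 names hold ~35% of the index's weight and,
# being highly correlated, all move together on a given day.
top10_weight = 0.35   # assumed combined weight of the "S&P 10"
top10_move = -1.0     # assume they all fall 1% together
rest_move = 0.0       # assume the other ~490 names are flat

# A cap-weighted index return is just the weight-weighted sum of moves,
# so the correlated top-10 block alone drags the whole index down
index_move = top10_weight * top10_move + (1 - top10_weight) * rest_move
print(round(index_move, 2))  # -0.35: the top 10 alone move the index 0.35%
```

Which is the sense in which comparing one of those names against "the market" mostly compares it against itself and its correlated peers.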