Was good while it lasted though.
I think you could find 10,000 quotes from HN alone about why SDEs were immune to the kind of labor-market struggles that would call for a union
Oh well, good luck everyone.
That said, I’m still sceptical it isn’t simply a reflection of an overproduction of engineers and a broader economic slowdown.
Not really. If it’s overproduction, the solution is tighter standards at universities (and students exercising more discretion about which programmes they enroll in). If it’s outsourcing, the solutions include labour organisation and, under this administration, immigration curbs and possibly services tariffs.
Either way, if it’s not AI the trend isn’t secular; it should eventually revert. This isn’t a story of junior coding roles being fucked, but of an unlucky (and possibly ill-prepared and misinformed) cohort.
Software isn't eating the world. Software ate the world. New use cases have basically not worked out (metaverse!) or are actively harmful.
Either way, there are layoff provisions with union agreements.
AI is still used in Hollywood but nobody is proud of it. No movie director goes around quoting percentages of how many scenes were augmented by AI or how many lines in the script were written by ChatGPT.
Hell, they're even (successfully) pushing back against automated gates! [1]
[0] https://www.cnn.com/2024/10/02/business/dock-workers-strike-...
[1] https://www.npr.org/2024/10/03/nx-s1-5135597/striking-dockwo...
For a recent example:
> Volkswagen has an agreement with German unions, IG Metall, to implement over 35,000 job cuts in Germany by 2030 in a "socially responsible" way, following marathon talks in December 2024 that avoided immediate plant closures and compulsory layoffs, according to CNBC. The deal was a "Christmas miracle" after 70 hours of negotiations, aiming to save the company billions by reducing capacity and foregoing future wage increases, according to MSN and www.volkswagen-group.com.
Unionization kind of worked for mines and factories because the company was tied to a physical plant that couldn't easily be moved. But software can move around the world in milliseconds.
Similarly, a lot of non-cutting edge SW jobs will also leave the US as tooling becomes more standardized, and other nations upskill themselves to deliver similar value at less cost in exchange for USD.
Software development at its core can be done anywhere, anytime. Unionization would crank the offshoring that already happens into overdrive.
Better our children never have to work because the robots do everything and they inherited some ownership of the robots.
There are two possibilities:
a) This is a large scale administrative coordination problem
b) We don't need as many software engineers.
Under (a), unionizing just adds more administrators and exacerbates the problem; under (b), unions are ineffective and just shaft new grads, or, if they do manage to be effective, kill your employer (and then no one has a job).
You can't just administrate away reality. The reason SWEs don't have unions is that most of us (unlike blue-collar labor) are intelligent enough to understand this. There was also something to be said for unions in factory work, where the workers really were fungible and the business was capital-intensive. Software development is almost the polar opposite: there's little capital involved, and the value is the theory the programmers carry in their heads, which makes them far less fungible.
Finally we do have legal tools like the GPL which do actually give us a lot of negotiating power. If you work on GPL software you can actually just tell your employer "behave or we'll take our ball and leave" if they do something stupid.
Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.
In the meantime, memes aside, I'm not too worried about being completely automated away.
These models are extremely unreliable when unsupervised.
It doesn't feel like that will change fundamentally with just incrementally better training.
"Model collapse" is a popular idea among people who know little about AI, but it doesn't seem to be happening in the real world. Dataset quality estimation shows no drop in quality over time, despite estimates of "AI contamination" ticking up over time. Some data quality estimates even show weak inverse effects (dataset quality is rising a little over time?), which is a mindfuck.
The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.
It's lossy compression at the core.
Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are either trying to imply a fundamental deficiency, a performance ceiling, or trying to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory in application to modern ML" discussions around.
The "fundamental deficiency/performance ceiling" argument I don't buy at all.
We already know that LLMs use high-level abstractions to process data, very much unlike traditional compression algorithms. And we already know how to use techniques like RL to teach a model tricks its dataset doesn't contain, which is where an awful lot of recent performance improvement is coming from.
Often the results will be great.
Sometimes the hallucinated details will not match the expectations.
I think this applies fundamentally to all of the LLM applications.
That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.
LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.
They do have the ability to fool people and exacerbate or cause mental problems.
Now you can't get around the fact that this might not be the case.
You're like that beetle going extinct mating with beer bottles.
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
We've already found that LLMs implement the very same type of abstract thinking as humans do. Even with mechanistic interpretability still in its infancy, you can probe LLMs and find some of the concepts they think in.
But, of course, denying that is much less uncomfortable than the alternative. Another one falls victim to the AI effect.
Any abstraction you're noticing in an LLM is likely just a plagiarized one
People have been arguing this is not the case for at least hundreds of years.
But I as a chess player can easily be replaced by a chess engine and I as a programmer might soon be replaceable by a next token predictor.
The only reason programmers think they can't be replaced by a next token predictor is that programmers don't work that way. But chess players don't work like a chess engine either.
It’s still horrible btw.
I'm not saying that LLMs will positively replace all programmers next year, I'm saying that there is a lot of uncertainty and that I don't want that uncertainty in my career.
Define "quality", you can make an image subjectively more visually pleasing but you can't recover data that wasn't there in the first place
Like the grille of a car. If we know the make and year, we can add detail with each zoom by filling in from external sources.
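The parent's point is easy to make concrete with a toy sketch (hypothetical data, not any real upscaler): two different "high-res" images can downsample to the identical "low-res" image, so the low-res pixels alone cannot tell you which original they came from. Any detail an enhancer adds is inference from outside knowledge, not recovery.

```python
# Two different 4x4 "images" that produce the identical 2x2 image
# under average pooling; once downsampled, the distinction is gone.
# Toy data chosen for illustration.

def downsample(img):
    """2x2 average pooling on a 4x4 grid (list of lists)."""
    return [
        [(img[2*r][2*c] + img[2*r][2*c+1] +
          img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4
         for c in range(2)]
        for r in range(2)
    ]

a = [[0, 8, 0, 8],
     [8, 0, 8, 0],
     [0, 8, 0, 8],
     [8, 0, 8, 0]]   # checkerboard "grille" pattern

b = [[4, 4, 4, 4],
     [4, 4, 4, 4],
     [4, 4, 4, 4],
     [4, 4, 4, 4]]   # flat grey

assert a != b
assert downsample(a) == downsample(b)  # both collapse to the same 2x2
```

So "recovering" the grille pattern from the grey thumbnail requires a prior (the make and year), and a different prior would yield a different, equally consistent answer.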
Model collapse is something demonstrated when models are recursively trained largely or entirely on their own output. Given that most training data is still generated or edited by humans, or is curated synthetic data, I'm not entirely certain why one would expect to see evidence of model collapse right now; but dismissing it as something that can't happen in the real world seems a bit premature.
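For what it's worth, the mechanism usually blamed for collapse is simple to sketch (toy numbers, an assumed setup; real training pipelines are far messier): generation truncates the low-probability tail of the distribution, much as top-k/top-p sampling does, so each retrain-on-generated-output round permanently loses rare tokens.

```python
# Toy illustration of tail loss under recursive training: "train" fits
# token frequencies, "generate" drops tokens below a probability cutoff
# (a stand-in for truncated sampling), then we retrain on the output.
from collections import Counter

def train(corpus):
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, size=1000, cutoff=0.05):
    # Keep only tokens above the cutoff, renormalize, emit proportionally.
    kept = {t: p for t, p in model.items() if p >= cutoff}
    z = sum(kept.values())
    out = []
    for tok, p in kept.items():
        out.extend([tok] * round(size * p / z))
    return out

corpus = ["the"] * 500 + ["cat"] * 300 + ["sat"] * 150 + ["mat"] * 40 + ["hat"] * 10
for generation in range(5):
    model = train(corpus)
    corpus = generate(model)

print(sorted(set(corpus)))  # the rare tokens "mat" and "hat" are gone
```

The surviving distribution is stable but strictly less diverse than the original, which is the collapse dynamic in miniature; whether real pipelines have enough fresh human data to counteract it is the open empirical question.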
I think the labs will be crushed by the exponent on their costs faster than white-collar work will be crushed by the 5%-improvement exponent.
> It doesn't feel like that will change fundamentally with just incrementally better training.
I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.
Also, LLMs don't need to be better programmers than you are, they only need to be good enough.
There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning, plus reliability I don't need to second-guess every time.
I might be wrong, but those two requirements seem incompatible with the current approach and hardware limitations.
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".
Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.
A grandmaster chess playing ai is not better at driving a car than my calculator from the 90s.
I'm arguing that the category of the problem matters a lot.
Chess is, compared to self-driving cars and (in my opinion) programming, very limited in its rules, the fixed board size and the lack of "fog of war".
Your stance was the widely held one, not just on Hacker News but also among the leading proponents of AI, when ChatGPT first launched. A lot of people thought the hallucination problem simply couldn't be overcome; that LLMs were nothing but glorified stochastic parrots.
Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.
Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true six months ago.
Well yes, now we know they make kids kill themselves.
I think we've all fooled ourselves like this beetle
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
For thousands of years, up until 2020, anything that conversed with us could safely be assumed to be another sentient/intelligent being.
Now we have something that does that but is neither sentient nor intelligent, just a (complex) deterministic mechanism.
I think I have a pretty good idea of what AI can do for software engineering, because I use it for that nearly every day and I experiment with different models and IDEs.
The way that has worked for me is to make prompts very specific, to the point where the prompt itself would not be comprehensible to someone who's not in the field.
If you sat a rando with no CS background in front of Cursor, Windsurf or Claude code, what do you suppose would happen?
It seems really doubtful to me that overcoming that gap is "just more training", because it would require a qualitatively different sort of product.
And even if we came to a point where no technical knowledge of how software actually works was required, you would still need to be precise about the business logic in natural language. Now you're writing computer code in natural language that will read like legalese. At that point you've just invented a new programming language.
Now maybe you're thinking, I'll just prompt it with all my email, all my docs, everything I have for context and just ask it to please make my boss happy.
But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
The most powerful way that I've found to conceptualize what LLMs do is that they execute routines from huge learnt banks of programs that re-combine stored textual information along common patterns.
They're cut and paste engines where the recombination rules are potentially quite complex programs learnt from data.
This view fits well with the strengths and weaknesses of LLMs - they are good at combining two well understood solutions into something new, even if vaguely described.
But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
I strongly suspect this is intrinsic to their training, because doing this is simply not required to complete the vast majority of text that could realistically have ended up in training databases.
Executing a sophisticated cut&paste scheme is in some ways just too effective; the technical challenge is how do you pose a training problem to force a model to learn beyond that.
Chess was once thought to require general intelligence. Then computing power became cheap enough that using raw compute made computers better than humans. Computers didn't play chess in a very human-like way and there were a few years where you could still beat a computer by playing to its weaknesses. Now you'll never beat a computer at chess ever again.
Similarly, many software engineers think that writing software requires general intelligence. Then computing power became cheap enough that training LLMs became possible. Sure, LLMs don't think in a very human-like way: There are some tasks that are trivial for humans and where LLMs struggle but LLMs also outcompete your average software engineer in many other tasks. It's still possible to win against an LLM in an intelligence-off by playing to its weaknesses.
It doesn't matter that computers don't have general intelligence when they use raw compute to crush you in chess. And it won't matter that computers don't have general intelligence when they use raw compute to crush you at programming.
The burden of proving that software development requires general intelligence is on you. I think the stuff most software engineers do daily doesn't. And I think LLMs will get continuously better at it.
I certainly don't feel comfortable betting my professional future on software development for the coming decades.
The goal of the industry has always been self-replacement. If you can't automate at least part of what you're working on you can't grow.
... unfortunately, as with many things, this meshes badly with capitalism when the question of "how do you justify your existence to society" comes up. Hypothetically, automating software engineering could lead to the largest open-source explosion in the history of the practice by freeing up software engineers to do something else instead of toil in the database mines... But in practice, we'll probably have to get barista jobs to make ends meet instead.
AI lawyers? Many years away.
AI civil engineers? Same thing, there is a PE exam that protects them.
I have a feeling language models will be good at virtually every "sit at a desk" job in a virtually identical capacity, it's just the act of plugging an AI into these roles is non-obvious.
Just as every business was eventually impacted by the Internet equally, the early applications were just an artifact of what was an easy business decision: e.g., it was easier to start a dotcom than to migrate a traditional corporate process.
What we will see here with AI is not the immediate replacement of jobs, but the disruption of markets with offerings that human labor simply can't out-compete.
I don't know. It seems pretty friendly to automation to me.
When was the last time you wrote assembly? When was the last time you had to map memory? Or blit memory to a screen buffer to draw a square on the screen? Or schedule processes and threads?
These are things that I routinely did as a junior engineer writing software a long time ago. Most people at that time did. For the most part, the computer does them all now. People still do them, but only when it really counts and applications are niche.
Think about how large code bases are now and how complicated software systems are. How many layers they have. Complexity on this scale was unthinkable not so long ago.
It's all possible because the computer manages much of the complexity through various forms of automation.
Expect more automation. Maybe LLMs are the vehicle that delivers it, maybe not. But more automation in software is the rule, not the exception.
The labor cost of implementing a given feature is going to drop dramatically. Jevons paradox will hopefully still mean that the labor pool is simply used to create '10x' the output (or whatever the number actually is).
If the cost of a line of code / feature / app becomes basically '0', will we still hit a limit in terms of how much software can be consumed? Or do consumers have an infinite hunger for new software? It feels like the answer has to be 'it's finite'. We have a limited attention span of (say) 8hrs/person * 8 billion.
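The ceiling the comment gestures at is easy to put rough numbers on, using its own assumed figures (8 hours of attention per person, 8 billion people):

```python
# Back-of-envelope attention budget per day, using the comment's
# assumed figures; both inputs are guesses, not measurements.
hours_per_person_per_day = 8
people = 8_000_000_000

attention_hours_per_day = hours_per_person_per_day * people
print(f"{attention_hours_per_day:,} person-hours/day")  # 64,000,000,000 person-hours/day
```

So however cheap software gets, there are on the order of 64 billion person-hours of attention a day for all of it to compete over.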
Inevitably, people remember that the hard part of programming isn't so much the code as it is putting requirements into maintainable code that can respond to future requirements.
LLMs basically only automate the easiest part of the job today. Time will tell if they get better, but my money is on me fixing people's broken LLM generated businesses rather than being replaced by one.
I don't think LLMs alone are going to get there. They might be a key component in a more powerful system, but they might also be a very impressive dead end.
There is an unimaginable amount of freely accessible training data out there for code. There aren't, for example, many transcribed therapy sessions out there.
The only thing that matters about software is that it's cheap and it sort of works. Low-quality software is already common. Bugs aren't usually catastrophic in the way structural failures would be.
Software engineers are expensive compared to many other white-collar workers.
Software engineering is completely unregulated and there is no union or lobby for software engineers. The second an LLM becomes good enough to replace you, you're gone.
Many other "sit at desk" jobs have at least some tasks that can't be done on a computer.
Software engineering feels like an extremely uncertain career right now.
You know even the CEOs are backtracking on that nonsense right?
Unrelated to the discussion, but I love these kinds of backup plans. I've found that most guys I talk to have one. Just a few days ago a guy was telling me that, if his beloved wife ever divorces him, then he'd move to a tropical island and become a coconut seller.
(My personal plan: find a small town in the Sonoran Desert that has a good library, dig a hole under a nice big Saguaro cactus, then live out my days reading library books in my cool and shady cave.)
Is there a visa for that? Doesn't seem feasible unless he lives in a country that has a tropical island already.
Also there are different metrics that are relevant like dollar count vs pure headcount. Cost cutting targets dollars. E.g. entry level developers are still expensive compared to other jobs.
It's not really an intelligence thing. You could have the most intelligent agent, but if the structural incentives for that agent are, for example, "build and promote your own library for X for optimal career growth", you would still have massive fragmentation. And under the current rent-seeking capitalist framework, this is a structural issue at every level. Firefox and Chrome? Multiple competing OSes? How many JS libraries? Now sure, maybe if everyone was perfectly intelligent _and_ perfectly trusting, then you could escape this.
* - Someone should maintain a walkback list to track these. I believe recent additions are Amodei of Anthropic and the CEOs of AWS and Salesforce. (Benioff of Salesforce, in February: "We're not going to hire any new engineers this year." Their careers page shows a pivot from that position.)
Seems like the capabilities of current systems map onto "the kind of labor that gets offshored" quite well. Some of the jobs that would get offloaded to India now get offloaded to Anthropic's datacenters instead.
The H1B pipeline has not decreased at all whereas millions of American workers have been laid off.
Happening simultaneously sadly.
IT help was outsourced to India years ago. I expect them to be replaced with AI the minute their government stops handing the firm big contracts because I’ve never spoken to anyone from that group who was actually better than a chat bot.
I wonder how much this actually matters? I understand that for an auditor, having a quality reputation matters. But if all audits from all firms are bad, how much would the world economy suffer?
Likewise for the legal profession, if all judges made twice the number of mistakes, how much would the world suffer?
At the end of the day it is a question of convenience/standards, if GAAP didn't exist maybe firms could use a modified accrual standard that is wholly compliant with tax reporting and that's it.
Is this hyperbole? It seems like the real question being asked here is "would the world be worse off without deterministic checks and balances", which I think most people would agree is true, no?
From that perspective, lowering the quality of something that is already non-rigourous might not have any perceivable effect. It’s only a problem if public perception lowers, but that’s a marketing issue that the big-4 already have a handle on.
The all-in on AI shows a lack of imagination around innovation.
https://www.bloomberg.com/news/newsletters/2024-05-30/tough-...
So, that doesn't seem like a likely culprit unless you have some convincing evidence.
I expect that other areas like accounting that use outsourcing are going to see similar effects in a few years.
Language barriers, culture, and knowledge are some of the biggest challenges to overcome for offshoring. AI potentially solves many of those challenges.
Isn't it exactly the opposite?
Language barriers: LLMs are language models and all of the major ones are built in English, speaking that language fluently is surely a prerequisite to interacting with them efficiently?
Knowledge: famously, LLMs "know" nothing, make things up all of the time, and only sometimes approximate "knowledge"
Knowledge: True to an extent, but my assumption here is that it would be used to fill in gaps or correct misunderstandings. Not wholesale doing my job. At least that’s often how I use it
That said, I have one ESL on my team who uses LLMs a lot like that and it's fine so who knows.
Google Translate is relatively awful. I have an intern now who barely speaks my native language and has very bad English, so we've been using it all the time, and it's always spot on, even for phrases that don't translate directly
I bet I can do a good job communicating with you without speaking a common language.
It was absolutely flawless, down to the accents and little quirks, at a level no tool before even came close to.
Parent is plain wrong and doesn't have a clue... that's what happens when folks skip learning foreign languages, the most important thing for life you can learn at school. Actively using multiple languages literally increases brain plasticity, much better than endlessly doing sudoku or similar brain teasers
Do those people really believe they're intellectually superior to the rest of the world? If a job can be done purely remotely, what stops the employer from hiring someone who lives in a cheaper place?
A mid-size US tech company I know well went fully remote after a lot of insistence from the workforce, prior to the pandemic they were fully in office.
Soon enough they started hiring remotely from EU, and now the vast majority of their technical folks are from there. The only US workers remaining are mostly GTM/sales. I personally heard the founder saying “why should we pay US comp when we can get extremely good talent in EU for less than half the cost”. EU workers, on average, also tend to not switch job as frequently, so that’s a further advantage for the company.
Once you adapt to remote-only, you can scoop some amazing talent in Poland/Ukraine/Serbia/etc for $50k a year.
I'm not talking about rural Chinese villages whose name you can't pronounce. Or the stereotypical Indian call centers. I'm talking about highly educated programmers who can communicate fluently in English, in cities like Beijing or Munich. If people in SV know how (relatively) little their counterparts make in these places, they'd be much more opposed to remote work.
And that was before LLM. Today practically the entire planet can write passable English.
It may or may not work but it can crater 70% of IT/software department by 2027 as per their plan.
On the other side, we have started to find that the value of outsourcing to very low cost regions has completely disappeared.
I expect that the wages in eastern Europe will quickly rise in a way they never did in former outsourcing hotspots (India for example), because they are able to do similarly complex and quality work to westerners, and are now enabled by awesome translation tools.
The low quality for cheaper is now better served by the Artificial Indian.
In my experience, pre-2015 or so, offshoring was limited in its utility. Communication was a bitch because videoconferencing from everyday laptops wasn't quite there yet, and a lot of the favored offshoring centers like India had horrible time zone overlap with the US. And perhaps most importantly, companies as a whole weren't used to fully supporting remote colleagues.
Now, though, if I interact with the majority of my colleagues over Zoom/Teams/Meet anyway, what difference does it matter where they're sitting? I've worked with absolutely phenomenal developers from Argentina, Poland and Ukraine, and there was basically no difference logistically between working with them and American colleagues. Even the folks in Eastern Europe shifted their day slightly later so that we would get about 4 hours of overlap time, which was plenty of time for communication and collaboration, and IMO made folks even more productive because it naturally enforced "collaboration hours" vs. "heads down hours".
I understand why people like remote, but I agree, US devs pushing for remote should understand they're going to be competing against folks making less than half their salaries.
Timezone overlap is also a big one.
I have had issues with Indian outsourcers like you say (lots of churn, time zone hell, a culture of pretending everything is fine until release day and then saying "sorry, nothing works", etc.), but it's a bigger world now, and there are still lots of folks making half of US dev salaries where none of these problems exist.
Granted, outsourcing is probably going to be hit-or-miss regardless of who's doing it.
Even a mid-size tech company I worked for had over a dozen small offices around the world to collect as many qualified developers as they could. They had some remote work too.
Still hired a lot of Americans. Thinking that remote work will be the end of American workers has been the driving force behind outsourcing pushes for decades, but it hasn’t worked that way.
The difference is that back then the project lead could explore outsourcing certain roles to India, EE and LatAm, while today the VP can explore outsourcing the project lead roles to those countries. These countries have built up their own native tech talent, many of whom already bring more to the table than the typical American - they work longer hours, for cheaper, and often bring a lot more experience. I've seen companies who only run sales teams with Americans, with the rest of the workforce being shipped out.
Notably, India already has nearly 2000 GCCs (Global Capability Centers, mega complexes of offices for foreign companies) set up, with that number only projected to increase as more mid-market firms expand. While many of them are just back offices, some, like Walmart's GCC, house the entire tech division: the CTO remains in the US while the entire software team is in India. And while earlier the Indian team would have had to adjust their timings to the USA's, now quite a few US-based employees have had to adjust their timings to India's.
I've worked with remote workers from around the world. Let me preface by saying there are of course exceptions but:
What I've found is that Americans most often exhibit self-starting behavior and creativity. What I mean by that is that non-US workers are great if you give them a specific task, even a really hard task.
But if you give them a nebulous problem, or worse, a business outcome, they tend to perform much more poorly. And I rarely see non-Americans say something like "I think our customers would like it if we added X to the product, can I work on that?".
I don't think it's because Americans are better at this -- I think it's cultural. America has a much higher risk tolerance than the rest of the world. Failing is considered a good thing in the USA. And the USA is much more entrepreneurial than the rest of the world.
These two things combined create a culture difference that makes a business difference.
Additionally, what I've found is that the exceptions tend to move here because their risk taking is much more acceptable here (or they are risk takers willing to move across the world, hard to say which way the causation goes).
...which is a lot like the LLMs! Maybe the skillset required to manage non-US workers is the same as for managing ChatGPT 6o, but the latter scales better.
I'm going to counterpoint somewhat. I think those attributes are evenly spread into all countries, but equally I think they are uncommon in all countries.
I don't live in the US. I have traveled there and elsewhere. I would agree that there are lots of cultural differences between places, even places as nominally similar as say the UK, Australia and the US.
Of course who you interact with in various places matters. If you go to India and visit a remote-programming-company you'll meet a specific kind of person, one well suited to providing the services they offer.
Dig a bit deeper elsewhere and you'll find some very bright, very creative, engineers in every culture. In some cases those folk are doing remote work for US companies. In a few cases they're building the software (creatively and all) that the US company is selling.
In countries that are isolated for one or other reason creativity thrives. Israel, South Africa, Russia, all have (or had) exceptional engineering abilities developed because international support was withheld.
Yes, it is hard to find good talent. It is hard to develop and nurture it. But it exists everywhere. And more and more I'm seeing folks outside the US take American jobs, precisely because American workers are so keen to explain how portable those jobs are.
I understand that the American psyche is built on exceptionalism. And that does exist in some areas. But unfortunately it also acts as a filter blinding you to both exceptionalism elsewhere and inferiority at home. By the time you realise someone else has the edge, it's too late. We've seen this in industry after industry. Programing is no different.
I understand also that shooting the messenger is easier than absorbing the message. Let the down-voting begin.
The data does not support your statement. From a startup report just four days ago:
> The United States alone generates 46.6% of all startup activity worldwide, nearly half of the global total. Together with China (9.2%), the United Kingdom (5.6%), and India (5%), these four countries account for 66.4% of the absolute global startup activity.
I will give you that Israel in particular has a strong risk taking culture, as does Singapore and Estonia. And there are a lot of startups coming out of there.
But overall the US has way more risk taking.
And like I said at the very beginning, there are of course exceptions. Yes, every culture has some brilliant risk takers. But at least until recently, many of them came to the USA after they got successful.
America is unique in the way its businessmen tend to think that creating a business is the only way to be creative.
And incidentally, the post was about employee creativity.
I think if you add the US to the list this theory disappears. It's more the frontier/self reliant/entrepreneurial attitude that I think makes the difference.
Isn't that mostly a function of how incentives are aligned? I had a job with a lot of outsourcing to India. The Indians were given specific bits of code to write. They didn't even know how their code fit into the application.
Their entire incentive structure was geared toward getting them to write those bits of code as quickly as possible, finish, and take another task. There just wasn't any room for "self-starting and creativity".
I have a feeling if the entire application had been moved to India things would have been different.
Interestingly the biggest exceptions were ones that had at some point lived and worked in the USA, and then had returned to their home country for some reason or another.
> I have a feeling if the entire application had been moved to India things would have been different.
I had direct experience with this. We had an office of full time employees in India tasked with a project, but I still had to hand hold them through most of the key decisions (which I didn't have to do with the US based teams nearly as much).
America is one of the most risk-averse countries in the world, seriously. Americans are constantly scared - of losing their jobs, of physical injury, of everything and everywhere.
> Failing is considered a good thing in the USA
America punishes failure pretty hard. Some people's failures are ignored, but most people's failures are punished in pretty significant ways.
They agree with me.
I mean come on, how do you expect people to interpret this paragraph? I can only assume you are trolling, so I'm done here.
My experience is ANY delegation incurs a big loss in agency. I want to create a startup -> my employees are much less invested than I am. My remote (French) employees are even less invested. My Ukrainian employees are completely passive and I fired them. The more the distance, the less invested, the more passive.
It’s tempting to attribute this to your country’s qualities, but my experience is every country is a mixed bag.
I wonder how many devs have been sacked for going out of their way and making stuff nobody in business asked for, or perhaps that broke something along the way and ended up being a net negative: in the EU vs US and other parts of the world.
Might be loosely related to how much money the company has to burn and the nature of their work (e.g. probably not looked well upon in consulting where you have to convince clients to pay for whatever you've made), as well as how popular each type of work is in each part of the world.
That is, an external worker (and I'm a consultant, I know) gets paid per hour; if the company goes under for whatever reason they just move on to the next assignment, while an internal employee depends much more on their job.
Anyway that's just a theory. I'm a "consultant" which is just a fancy word for a temp / hired hand, and I'm somewhere in the middle in that I will think along with the company and propose improvements, but at the same time have lower risk and much less attachment to the companies I work for.
I don't think it's cultural per se. As an extreme example, the CEOs of Google and Microsoft were both born and raised in India.
I've experienced both. Working with offshore employees and full time employees who happened to be in foreign countries. It was a similar experience with both, the exception being the ones that had previously lived and worked in the US.
> I don't think it's cultural per se. As an extreme example, the CEOs of Google and Microsoft were both born and raised in India.
Sundar Pichai moved to the US when he was in college. His entire working career and a bunch of his schooling was in the US.
Satya Nadella did the same.
As I said in my original reply, the ones who are more entrepreneurial or successful tend to move to the US (or at least used to).
The moment they can replace you for cheaper, they will, whether you insist on working remotely or not.
But alas, such a system is fundamentally impossible. Physics just won't allow it.
Capitalism dictates that a capable remote person will not keep working for a single employer, as that would be a waste of time.
He/she will work for multiple employers (overemployment and such), maximizing earnings, which will maintain a constant gap between in-office and remote workers.
In many larger companies also, nationstate threats and national security are a trending issue.
If you deal with a lot of PII, outsourcing your data processing pipelines to China isn't going to fly with Congress when you get subpoena'ed for a round with Hawley.
* lied about their capabilities/experience to get the job,
* failed to grok requirements through the language barrier,
* were unable to fix critical bugs in their own code base,
* committed buggy chatgpt output verbatim,
* and could not be held liable because their firm is effectively beyond the reach of the US legal system.
In a couple of projects I've seen a single US-based developer replace an entire offshore team, deliver a superior result, and provide management with a much more responsive communication loop, in 1% of the billable hours. The difference in value is so stark that one client even fired the VP who'd led the offshoring boondoggle.
Software talent is simply not as fungible as some MBAs would like to believe.
But at the same time, I doubt there is anything special about me or my US-born coworkers. We aren't superior just because of the continent we live in. But offshore work is, almost as a rule, terrible quality done by people who are frustrating to work with. It doesn't make sense.
Anyway, highly competent and experienced folks will always thrive regardless of environment. It's the quiet rest that should be worried, from multiple angles.
But I also believe the managers hiring offshore employees are fully aware of this. If they aren't then they're not very good managers and/or have no idea what they're doing.
The offshore people mainly work on SAP and legacy systems though; it turns out it's very hard to find willing or competent people in Europe who actually want to work on / with SAP. However, foreign workers have fewer qualms about learning stuff like that, since the money is really good.
But it's a bit like IKEA: if you buy their cheapest stuff it will fall apart after a few months, but their "expensive" lines are the same quality as the competition while still far cheaper.
You might think you're a solid mahogany table, but at the end of the day you're probably the same table as the one sold at IKEA, just more expensive.
A lot of that stems from a lack of job security. Stuff like suddenly being locked out of your work email/slack or being escorted out of company premises is largely unheard of in the rest of the world.
As a point of comparison: I'm a contractor based in a popular outsourcing destination. My contract is extended well over a month before it expires and I would need to do something particularly harmful to be let go just like that, as our client values continuity of services and will hold the agency accountable should that suffer.
Over here if a job listing mentions "US client" it typically means considerably more work for considerably more pay. Some go for that, others opt for more relaxed roles. I can't imagine having US jobs as the only option.
The majority of people in the company are still in the US, and even for the East coast, the timezones are just annoying to work around sometimes. Either I need to do late days, or they have to do uber early mornings/SUPER late days, don't even get me started on West coast where the hours basically never match. And I'm in the closest timezone I can be for the US.
And there's also a cultural aspect to it. I simply work differently to how the US bosses expect, because my employer has to respect workers' rights if they want to hire people in the EU, unless they hire them as contractors (who still have many protections in that case). I clock off at exactly 17:00, I never answer messages outside working hours, I don't do overtime or anything resembling it, etc. And yes, they don't pay me the same as I would get in the US, but it's really not that much lower, plus life is just cheaper, even here in the Netherlands. I get paid less in relative terms, but from what I can tell, other than the people getting paid obscene amounts, my quality of life is higher than most of my US counterparts'.
I've noticed my US colleagues are much more willing to waste away their lives for their employer as well, even if there's no real expectation for them to do so, and the business obviously prefers those kind of employees over the ones like me.
So there's still plenty of reasons to keep hiring US-based devs, from cultural to logistical. Maybe you guys should work on getting some actual worker protections first, though...
They always ask “if a job can be done remote why not just hire a foreigner in a cheap place?” and never ask “if the foreigner was so good as the American engineer why wouldn’t they be getting paid the same as the American?”
It’s like they think companies are dumb and there is some undiscovered engineering arbitrage opportunity waiting to be tapped that will end the high 6 figure salaries of American software engineers forever.
And yet, since the 90s, software engineer salaries only go up. Millions of Indians flood the foreign markets, but American tech salaries only go up. Covid hits and everyone goes remote, but the salaries only go up. They always go up. American tech holds a supremacy over the world that you will likely not see the end of in your lifetime. There is so much money, so much risk taking, so much drive to dominate, other countries are generations behind.
But hey, keep doing what you’re doing. Maybe you’ll save a couple bucks while your competitors gobble up the market with far better engineering talent. Not “equivalent” talent: better talent.
What I bet is happening under the covers is reprioritization of work, offshoring or both.
AI has been frequently used as an explanation for layoffs.
Before AI, layoffs would be a positive signal to investors, but they'd be demoralizing to staff and/or harm the brand.
Now, you can say, "Wow, we're so good at technology, we've eliminated ___ jobs!" and try to get the best of both worlds.
Plus slashing jobs like this keeps the plebs in line. They don’t like software engineers having the money and job security to raise a stink over things. They want drones terrified of losing everything.
https://esborogardius.substack.com/p/if-ai-doesnt-fire-you-i...
I disagree. My evidence is simple: just look at how the most recent generation of smartphones is being advertised. Look at the platforms like Base44 that are spamming their ads all over YouTube. The bet is diversified quite a bit, into the expectation that end users will (eventually) pay through the nose for AI-powered toys.
We seem to be in this illogical (delusional?) era where we are being told that AI is 'replacing' people in certain sectors or types of work (under the guise that AI is better or will soon be better than humans in these roles) yet those same areas seem to be getting worse?
- Customer service seems worse than ever as humans are replaced with "AI" that doesn't actually help customers more than 'website chatbots' did 20 years ago.
- Accounting was a field that was desperate for qualified humans before AI. My attempts to use AI for pretty much anything accounting-related have had abysmal results.
- The general consensus around software development seems to be that while AI is lowering the barrier of entry to "producing code", the rate of production of tech debt and code that no one "owns" (understands) has exploded with yet-to-be-seen consequences.
^ This. (Tho I'm not sure about it being "general consensus".) Vibe code is the payday loan (or high-interest credit card) of tech debt. Demo-quality code has a way of making it into production. Now "everyone" can produce demos and PoCs. Companies that leverage AI as a powerful tool in the hands of experienced engineers may be able to iterate faster and increase quality, but I expect a sad majority to learn the hard way that there's no free lunch, and shipping something you don't understand is a recipe for disaster.
For example, cashiers. There are still many people spending their lives dragging items over a scanner, reading a number from a screen, holding out their hand for the customer to put money in, and then sorting the coins into boxes.
How hard can it be to automate that?
The one I'm desperately waiting for is serverless restaurants—food halls already do it but I want it everywhere. Just let me sit down, put an order into the kitchen, pick it up myself. I promise I can walk 20 feet and fill my own drink cup.
Call it what you like but replacing the work of humans one for one is difficult and usually not necessary. Reformulating the problem to one that machines can solve is basically the whole game. You don't need a robot front desk worker to greet you, you just need a tablet to do your check in.
[edit] Aldi did automate the management of getting shoppers to do that work, because there’s not a person standing there taking and handing out quarters, but (very simple) machines. Without those machines they might need a person, so that hypothetical role (the existence of which might make the whole scheme uneconomical) is automated. But they didn’t automate cart return, all that work’s still being done by people.
But it's good if both are available, as apparently there will be customers for both.
And I think the entire mid and low range restaurants could replace servers with a tablet and people would be happier. I'm not sure how it doesn't make more money for the restaurant too, making it so easy to order more during a meal.
And if so, why can't we detect it via camera + AI?
No thanks.
I'm sure you can find videos of thefts in San Francisco if you need a visual demonstration. No cashier is going to jump in front of someone to stop a theft.
If there's no cashier and you're doing it yourself, a whole lot more people will "forget" to scan a couple items, and that adds up.
You're committing the all-or-nothing fallacy. The fact that a cashier does not prevent all thefts does not mean a cashier does NOTHING about theft.
Yes, for one thing, it ignores that a very large share of retail theft is insider theft, and that cash handling positions are the largest portion of that.
Cashiers absolutely do something for theft.
Self checkout has been a thing for ages. Heck in Japan the 711s have cashiers but you put the money into a machine that counts and distributes change for them.
Supermarkets are actually getting rid of self checkouts due to crime. Surprise surprise, having less visible "supervision" in a store results in more shoplifting than having employees who won't stop it anyway.
I can go to Safeway or the smaller chain half a block away.
The Safeway went all in on self checkouts. The store is barely staffed, shelves are constantly empty, you have to have your receipt checked by security every time, they closed the second entrance permanently, and for some reason the place smells.
Other store has self checkouts but they also have loads of staff. I usually go through the normal checkout because it’s easier and since they have adequate staff and self checkout lines it tends to be about the same speed to.
End result is I don’t shop at Safeway if I can avoid it.
Retail pharmacists are human vending machines. You don't need AI. It's a computer prescription written by a far more qualified human which is then provided to a nigh-illiterate half-wit who will then try as hard as possible to misread it. Having then misread it, the patient must then coax them out of their idiocy until they apologize and fulfill what's written.
Meanwhile some Internet guy who gets all his information from the Internet will repeat what he's heard on the Internet. I know this because anyone passingly acquainted with this would have at least made the clarification between compounding pharmacists and retail pharmacists or something.
Also, after the prescription ends, they're still filling it. I just never pick it up. The autonomous flow has no ability to handle this situation, so now I get a monthly text that my prescription is ready. The actual support line is literally unmanned, and messages given it are piped to /dev/null.
The existing automation is hot garbage. But C-suite would have me believe our Lord & Savior, AI, will fix it all.
[1] "Nvidia Forecasts Decelerating Growth After Two-Year AI Boom" <https://news.ycombinator.com/item?id=45053175>
It emphasizes "AI adoption linked to 13% decline," which implies causation. The study itself only claims "evidence consistent with the hypothesis."
The article also largely highlights job loss for young workers, while only briefly mentioning cases where AI complements workers.
The study's preliminary status -- it is not peer reviewed -- is noted but only once and at end. If the article was more balanced it would have noted this at the beginning.
Articles on the same subject by the World Economic Forum, McKinsey, and Goldman Sachs are more balanced and less alarmist.
I can’t think of a single job that modern AI could easily replace.
AI can now do it very cheaply, so there's no need to give that job to a human anymore.
It could replace many workers, perhaps sacrificing quality, but that's considered quite acceptable by those making these decisions because of the huge labor cost savings.
It also could raise the quality of work product for those working at a senior level by allowing them to rapidly iterate on ideas and prototypes. This could lower the need for as many junior workers.
Either it’s a cover for something or people are a bit too overzealous to believe in gains that haven’t materialised yet.
There is less upswing in reducing costs than in increasing profits. Companies want to increase profits actually, not just reduce costs which will be eaten away by competition. In a world where everyone has the same AIs, human still make the difference.
I know a handful of digital marketers, that work for different marketing firms - and the use of GenAI for those tasks have exploded. Usually tasks which they either had in-house people, or freelancers do the work.
Now they just do it themselves.
Coca-Cola's Christmas ad had AI slop in it last year. That doesn't seem very cheap or low-stakes.
The worst thing for me would be just needing to get a job like I had before being a dev, the stakes are so much grander for all the companies. It's only really existential for the side of this that isn't me/us. I've been working since I was 15, I can figure it out. I'll be more happy cutting veggies in a kitchen than every single CEO out there when all is said and done!
But let's blame AI
For example, a call center might use the excuse of AI to fire a bunch of people. They would have liked to just arbitrarily fire people a few years ago, but if they did that people would notice the reduction in quality and perhaps realize it was done out of self-serving greed (executives get bigger bonuses / look better, etc). The AI excuse means that their service might be worse, perhaps inexcusably so, but no one is going to scrutinize it that closely because there is a palatable justification for why it was done.
This is certainly the type of effect I feel like underlies every story of AI firing I've heard about.
Exactly.
The big (biggest? ) problem of modernity is that quality is decorrelated from profit. There's a lot more money in having the optics of doing a good job than in actually doing it; the economy is so abstracted and distributed that the mechanism of competition to punish bad behavior, shitty customer service, low standards, crappy work, fraud... is very weak. There is too much information asymmetry, and the timescale of information propagation is too long to have much of an effect. As long as no one notices what you're fucking up very quickly you can get away with it for a long time.
Seems even worse to me. At least in the 'competition' paradigm there's a mechanism for things getting better for consumers. No such thing here.
The thing whose exact purpose is to replace labor? Must be a conspiracy going on to suggest it's linked to reducing labor. Bias! Agenda!
> Prompt: Attached is a paper. Below is an argument made against it. Is there anything in the paper that addresses the argument?: High interest rates + tariff terror -> less investment -> less jobs
> High rates/firm shocks: They add firm–time fixed effects that absorb broad firm shocks (like interest-rate changes), and the within-firm drop for 22–25-year-olds in AI-exposed roles remains.
> “Less investment” story: They note the 2022 §174 R&D amortization change and show the pattern persists even after excluding computer occupations and information-sector firms.
> Other non-AI explanations: The decline shows up in both teleworkable and non-teleworkable jobs and isn’t explained by pandemic-era education issues.
> Tariffs: Tariffs aren’t analyzed directly; broad tariff impacts would be soaked up by the firm–time controls, but a tariff-specific, task-level channel isn’t separately tested.
If we replace all juniors with AI, in a few years there won't be skilled talent for senior positions.
AI assistance is a lot different than AI running the company. Making expensive decisions. While it could progress, bear in mind that some seniors continue to move up in the ranks. Will AI eventually be the CEO?
We all dislike how some CEOs behave, but will AI really value life at all? CEOs have to have some place to live, after all.
A human CEO might do morally questionable things. All do not, of course, but some may.
Yet even so, they need a planet with air, water, and some way to survive. They may also want their kids to survive.
An AI may not care.
It could be taking "bad CEO" behaviour to a whole new level.
And even if the AI had human ethics, humans play "us vs them" games all the time. You don't get much more "them" than an entirely different lifeform.
Correlation is not causation. The original research paper does not prove a connection.
> I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.
They are nonetheless subject to publish or perish pressure and have strong incentives to draw publishable attention-grabbing results even where the data is inconclusive.
Except they do, if their raw materials, tools, etc., are imported.
This is a complete reversal from the past, where having a high headcount was an easy signal of a company's growth (i.e. more people means more people building features, means more growth).
Investors are lazy. They see one line go down, they make the other line go up.
CEOs are lazy. They see line go up when other line goes down. So they make other line go down.
(I am aware that "line go up" is a stupid meme. But I think it's a perfect way to describe what's happening. It is stupid, lazy, absurd, memetic. It's the only thing that matters, stripped off of anything that is incidental. Line must go up.)
It just doesn't make sense to pay someone $10 when you can pay someone else $2
Nearly half the unicorns in the country were founded by foreigners living in the country. https://gfmag.com/capital-raising-corporate-finance/us-unico...
The biggest problem right now is that there is no distinction between companies replacing Americans labor with cheap labor and entrepreneurial talent that creates jobs. Everyone is on the same visa.
Having to work with ESL contractors from firms like Cognizant or HCL is true pain. Normally it would be like 3-4 US employees working on something, and then it's like 20-30 ESL outsourced people working on the same thing. The quality is so poor, though, that it's not worth it.
My current org nuked their contract with HCL after 2 years because of how shitty they are, and now everything is back onshore. Millions wasted lol. Corporations are so silly sometimes.
Saving money on wages isn't the only consideration.
Reductions in call centers have been going on for a while as more people use automated solutions (not necessarily AI), and many of the growing companies make it hard to reach a real person anyway (Amazon, Facebook, etc). I feel like AI is throwing fuel on the existing fire, but isn't as much of a driver as the headlines suggest.
https://digitaleconomy.stanford.edu/publications/canaries-in...
It looks like they're looking at data for the last few years, not just the last few months.
I haven't read it, and maybe you can disagree with their opinions, but there does appear to be a slow down in college graduates recently.
Less investment -> more layoffs -> "AI is replacing workers" -> This is good for AI.
A computer does something good -> "That's AI" -> This is good for AI.
A computer does something bad -> "It needs more AI" -> This is good for AI.
It presents a difference-in-differences (https://en.wikipedia.org/wiki/Difference_in_differences) design that exploits staggered adoption of generative AI to estimate the causal effect on productivity. It compares headcount over time by age group across several occupations, showing significant differentials across age groups.
Page 3: "We test for a class of such confounders by controlling for firm-time effects in an event study regression, absorbing aggregate firm shocks that impact all workers at a firm regardless of AI exposure. For workers aged 22-25, we find a 12 log-point decline in relative employment for the most AI-exposed quintiles compared to the least exposed quintile, a large and statistically significant effect."
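The quoted design can be sketched with simulated data. This is a toy illustration under stated assumptions (a fake 30-firm panel, a built-in -0.12 effect, statsmodels OLS), not the paper's actual code or data: the firm-time fixed effects `C(firm):C(month)` absorb shocks that hit every worker at a firm alike, and the difference-in-differences estimate is the coefficient on the post x exposed interaction.

```python
# Toy difference-in-differences sketch (simulated data, NOT the study's code).
# Firm-time fixed effects absorb aggregate firm shocks; the coefficient on
# post:exposed recovers the decline we bake into the simulation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for firm in range(30):
    firm_shock = rng.normal(0, 0.1, size=24)  # hits all of a firm's workers alike
    for exposed in (0, 1):                     # AI-exposed vs. non-exposed group
        base = rng.normal(5, 0.2)              # group's baseline log employment
        for month in range(24):
            post = int(month >= 12)            # "adoption" happens at month 12
            effect = -0.12 * post * exposed    # assumed 12 log-point decline
            rows.append(dict(
                firm=firm, month=month, exposed=exposed, post=post,
                log_emp=base + firm_shock[month] + effect + rng.normal(0, 0.02)))
df = pd.DataFrame(rows)

# C(firm):C(month) = firm-time fixed effects; post alone is collinear with
# them, so only the interaction and the exposed main effect are identified.
fit = smf.ols("log_emp ~ exposed + post:exposed + C(firm):C(month)", data=df).fit()
print(round(fit.params["post:exposed"], 3))  # estimate near the assumed -0.12
```

The point of the sketch is the identification logic, not the numbers: because the firm-month dummies soak up anything that hits a whole firm at once (rate hikes, demand shocks), the interaction coefficient is driven only by the within-firm gap between exposed and unexposed workers before vs. after adoption.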
The OP's point could still be valid: it’s still possible that macro factors like inflation, interest rates, or tariffs land harder on the exact group they label ‘AI-exposed.’ That makes the attribution messy.
pg. 19, "We run this regression separately for each age group."
Is AI being used to attempt to mitigate that effect?
I don't think their methods or any statistical method could decouple a perfectly correlated signal.
Without AI, would junior jobs have grown as quickly as other?
Then I hear about a lot of youngsters struggling to find work, and see articles like this.
Well, who's left? Is there a sweet spot at like 31 that are just cleaning up?
We can't call it incompetence because neither those whom we have come to know as capitalists nor their advisors are incompetent, which means they quite literally do not want to offset any decline in jobs or (job creation) that can be linked to progress.
That's not strange. A "capitalist" wants market participation to grow, infinitely, which is possible. Who we came to know as capitalists don't care about the markets, actual market growth or market participation. They only care about the growth of the value of the markets, "however" that happens.
I highly recommend that journalists and economists dig a bit more radically honest into the matter. There'd be more value in that, more blog posts, more articles, more discussions on all platforms, and thus more participation.
I mean it's a scapegoat vs straw man vs actual culprit kind of situation ... isn't it?
Note upfront: I'm not suggesting AI is not having an impact. That would be foolish. But I will say there's *a lot* less to the conclusion of this study than claimed, simply because the data is questionable. It's not that they did anything wrong per se. I won't say that here because it'll end up a HN cluster fuck. Cluster fuck aside, the caveats and associated doubt are enough to say, "Don't bet the farm on this study." Great banter for the bar? Sure.
It's an interesting study, but I've seen it called "absolute proof" and other such things. Don't be fooled; it's not that.
https://digitaleconomy.stanford.edu/wp-content/uploads/2025/...
From the original study:
> "This study uses data from ADP, the largest payroll processing firm in America. The company provides payroll services for firms employing over 25 million workers in the US. We use this information to track employment changes for workers in occupations measured as more or less exposed to artificial intelligence"
a) I'm calling this out because I've seen posts on LinkedIn saying it was a sample of 25M. Nope! ADP simply does payroll for that many.
b) The size of the US workforce is ~165M, making ADP's coverage ~15% of the workforce.
https://www.statista.com/statistics/191750/civilian-labor-fo...
c) Do the businesses ADP serves come from particular industries? Are they of a particular size, or in particular geographic locations? It's not only about the size of the sample - which we'll get to shortly - but the nature of the companies - which we'll also get to shortly.
> "We make several sample restrictions for our main analysis sample."
d) It's great that they say this, but it should raise an eyebrow.
> "We include only workers employed by firms that use ADP’s payroll product to maintain worker earnings records. We also exclude employees classified by firms as part-time from the analysis and subset to people between the age of 18 and 70."
e) Translation: we did a slight bit of pruning (read: cherry-picking).
> "The set of firms using payroll services changes over time as companies join or leave ADP’s platform. We maintain a consistent set of firms across our main sample period by keeping only companies that have employee earnings records for each month from January 2021 through July 2025."
f) Translation: More cherry-picking.
> "In addition, ADP observes job titles for about 70% of workers in its system. We exclude workers who do not have a recorded job title."
g) Translation: More cherry-picking.
> "After these restrictions we have records on between 3.5 and 5 million workers each month for our main analysis sample, though we consider robustness to alternative analyses such as allowing for firms to enter and leave the sample."
h) 3.5M to 5.0M feels like a large enough sample... if it weren't so "restricted." Furthermore, there's no explanation of the 1.5M delta, or of how adding or removing that many workers impacts the analysis.
i) And why did they consider that? And why did they do what they did? It's a significant assumption that gets nothing more than a hand wave.
> "While the ADP data include millions of workers in each month, the distribution of firms using ADP services does not exactly match the distribution of firms across the broader US economy."
j) Translation: as mentioned above ADP !== a representation of the broader economy.
> "Further details on differences in firm composition can be found in Cajner et al. (2018) and ADP Research (2025)."
k) Great, there's a citation, but given the acknowledgement of the delta, isn't at least a line or two in order? Something about the nature of the delta, and THEN mention the citation?
l) Editorial: You might think this hand-wave is ok, but to me it's usually indicative of a tell and a smell.
m) Finally, do understand the nature of academia and null research (which has been mentioned on HN). In short, there is a (career / financial) incentive to find something novel (read: worth publishing). You advance your career by doing not-null research.
Again, I'm not suggesting anything nefarious per se. But this study is getting A LOT of attention. All things considered, more than it objectively deserves.
__Again: I'm not suggesting AI is not having an impact. That would be foolish.__
Everyone is doubling down on hiring IN India right now. H1B isn’t even a thing. It’s offshoring to Indians that are utilizing AI to ship good enough slop. Everyone’s India office is rapidly expanding.
For example, I wonder how many fewer juniors were needed when we had better programming languages and tools? Do certain programming practices lead to fewer new workers? How many new factory workers aren’t hired on the factory floor due to a form of automation?
I’ll upvote you though because I hadn’t read the whole backstory of the luddites before.
Ostensibly it's to help programmers, or writers, or lawyers, or whomever. But those are just the users of AI.
The owners and buyers of AI at a company level are developing and using it to push down payroll expenses. That's it. It's to avoid paying people, and providing them benefits. Even if you fire 50% of your employees, realize it was a terrible mistake, and hire most of them back, it's a net reduction in payroll costs.
No idea if this will last long though.
This is inane. If an employer hired most of these employees back it means that firing them negatively impacted the bottom line.
But I do know that companies fire people and stay short-staffed just to keep payroll down all the time. Even when externally that seems like a terrible idea, and likely impacts bottom line. It's important to realize just how much companies hate payroll. And AI is a great way to try to reduce it.
Yes, stunt growth if that growth is immediately harmful to the public. Provide adverse incentives that increase the cost of replacing humans. Less or no government subsidies, incentives or tax breaks if you replace humans with LLMs. Even without replacing humans, tax LLM usage like cigarettes.
In the short term, that is. Over time, wind down these artificial costs so that humans transition to roles that can't be automated by LLMs. Go to school, get training, etc., in other fields. Instead of having millions of unemployed, restless people collapsing your society.
But everyone is on the take; they want their short-term lobbying money and stock tips so they can take what's theirs and run before the ship sinks. (If I can be a bit overdramatic :) )
Feudalism.
Ancient Egypt (elite in pyramids, slaves otherwise) is more likely.
> That's optimistic.
> Ancient Egypt (elite in pyramids, slaves otherwise) is more likely.
No, you're both being optimistic. The feudal lords had a vital need for serfs, and the pharaohs for slaves.
It'll be more like elite in pyramids, everyone else (who survives) lives like a rat in the sewers, living off garbage and trying to stay out of sight. Once the elite no longer need workers like us, they'll withdraw the resources they need to live comfortably, or to even live at all. They're not making more land, and the capitalist elite have "better" uses for energy than heating your home and powering your shit.
When you don't need as many people because of automation, you also don't need them to fight your wars: you use drones and other automated weapons. You don't need things like democracy, which existed to keep people from turning to revolution; automated weapons solve that problem. So you don't really need as many people anymore, and you stop providing the expensive healthcare, food production, and water to keep them all alive.
We have seen a lot of use of H-1B and outsourcing despite the massive shortage of jobs. Seeing lots of fake job sites filled with AI-generated fake openings and paid memberships for access to "premium jobs."
They're using ICE to effectively pay half the country to murder the other half, but the ICE budget is limited so that automated systems can then gun down the ICE community to replace 99.9% of humans with machines.
Ultimately this is great for Russia, because they'll still be able to invade even with only 300 soldiers left in their military: after they knock out the AI-run US with a low-orbit nuke blast, basically only Melania swinging her purse at the troops will be left to resist.
[0] I was going to mark this as sarcasm, but then I remembered that the US elected Donald Trump as president, two times so far, so I'm going to play it straight.
A little bit late, aren't we? Because if we did that, we would still be using postmen to send messages.
This argument is vacuous if you consider a marginal worker. Let's say AI eliminates one worker, Bob. You could argue "it was better to amplify Bob and share the gains". However, that assumes the company needs more of whatever Bob produces. That means you could also make an argument "given that the company didn't previously hire another worker Bill ~= Bob, it doesn't want to share gains that Bill would have provided blah blah". Ad absurdum, any company not trying to keep hiring infinitely is doing rent extraction.
You could make a much more narrow argument that cost of hiring Bill was higher than his marginal contribution but cost of keeping Bob + AI is lower than their combined contribution, but that's something you actually need to justify. Or, at the very least, justify why you know that it is, better than people running the company.
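To make that narrow version of the argument concrete, here's a toy calculation. Every number below is invented purely for illustration; nothing comes from the thread or any real company:

```python
# Hypothetical figures for the marginal-hiring argument; all invented.
bob_cost, bob_output = 150_000, 200_000    # Bob alone: positive margin
bill_cost, bill_output = 150_000, 140_000  # a second hire's marginal output is lower
ai_cost, ai_uplift = 20_000, 90_000        # AI tooling boosts Bob's output

# Hiring Bill destroys value at the margin, so the firm rationally stops at Bob.
assert bill_output - bill_cost < 0

# Bob + AI beats Bob alone, so adopting AI is rational too -- neither decision
# requires "rent extraction," just comparing marginal cost to marginal contribution.
margin_bob = bob_output - bob_cost
margin_bob_ai = (bob_output + ai_uplift) - (bob_cost + ai_cost)
print(margin_bob, margin_bob_ai)  # prints "50000 120000"
```

Swap in real figures and the comparison works the same way: each decision stands or falls on marginal cost versus marginal contribution, which is exactly what one would need to justify.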
"Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor."
No mention of rent-seeking.
No evidence they are being economically short-sighted.
> they'd rather own 100% of diminished capacity than share proceeds from exponentially increased capacity
They're using cheap AI to replace more expensive humans. There's no reason to think they are missing some exponential expansion opportunity that keeping those humans would achieve, and every reason to think otherwise.
Competition would fix a whole lot of problems.
The real disruption is whether we use it to multiply human potential, or to shrink it in the name of control.
Customer service handles all the things that customers aren't trusted to manage on their own with a self-service portal - that's the whole point of having a trusted human involved at all. Giving those tasks to LLMs won't work because the customer can just prompt inject their way to whatever toolcalls correspond to their desired outcome.
As a person who aspires to actually read the documentation, try common troubleshooting, google it, etc. before calling support, I'd really love to go directly to second-tier support. But apparently the bulk of support calls come from low-effort users, and now they'd get the pleasure of an LLM, instead of a person, telling them to reset their router, make sure the thingie on the spray bottle is in the "on" position, or call the airline.
techpineapple•17h ago
“where AI is more likely to automate, rather than augment , human labor.”
Where is AI currently automating human labor? Not software engineering. Or: what's the difference between AI that augments me so I can do the job of three people, and AI that "automates human labor"?
JumpCrisscross•17h ago
If the field has a future.
stonemetal12•16h ago
If your job is to swing a hammer, then a drill robot augments your job (your job is now swinging the hammer and drilling holes).
How that differs from a drill bot automating the human driller's job is left as an exercise for the reader.
marcosdumay•7h ago
The paper says one of those is impacted, and the other isn't.
So, yeah, not only that's what the GP is asking, but I'd like to know it too.
WillPostForFood•16h ago
> We also analyze how AI is being used for tasks, finding 57% of usage suggests augmentation of human capabilities (e.g., learning or iterating on an output) while 43% suggests automation (e.g., fulfilling a request with minimal human involvement).
From the data, software engineers are automating their own work, not augmenting. Anthropic's full paper is here:
https://arxiv.org/html/2503.04761v1
techpineapple•16h ago
tart-lemonade•15h ago
- Chief Executives
- Maintenance and Repair Workers, General
- Registered Nurses
- Computer and Information Systems Managers
After skimming [0], I can't seem to find a listing of jobs that would be augmented vs automated, just a breakdown of the % of analyzed queries that were augmenting vs automating, so I'm a bit confused where this is coming from.
[0]: https://arxiv.org/abs/2503.04761