Why does this lie go uncontradicted in the article? How does increasing the productivity of software engineers benefit anyone but employers? How do "AI" tools benefit humanity?
"AI" should stay in Academia, where it belongs.
The same reason a tractor benefits society, not just the farmer. Labor is freed up for other uses, like inventing computers.
Who is going to pay laid off software engineers to spend time on inventing anything?
Fortunately, "AI" decreases productivity in software engineering, so this question is academic. But The Atlantic should mention these issues.
Citation needed.
- Understanding code: I can dump thousands of lines of undocumented code into a large-context model and get back architectural outlines, API usage examples, or even full documentation. I can also ask questions directly of the codebase.
- Debugging. The newer reasoning models are extremely good at deciphering a test failure or stack trace and pinpointing the potential causes of the bug, sending me right to the relevant code.
- Suggesting where to make edits. I've started asking models for a high-level implementation plan based on the codebase plus a dumped-in issue thread; they frequently spot parts of the code I'd forgotten I would need to update.
- "What should I test?": dump in a diff and get back a checklist of branches I need to add test coverage for.
At this point, if you can't find ways to make yourself more productive as a programmer with LLMs, I have to conclude that you just aren't trying very hard.
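The "dump in a diff" workflow above can be sketched as a small pre-processing step. This is only an illustration: `changed_files` and `build_test_prompt` are names invented here, the prompt wording is made up, and the actual model call (whatever API you use) is left out.

```python
def changed_files(diff: str) -> list[str]:
    """Extract the paths touched by a unified diff (the '+++ b/...' headers)."""
    return [
        line[len("+++ b/"):]
        for line in diff.splitlines()
        if line.startswith("+++ b/")
    ]

def build_test_prompt(diff: str) -> str:
    """Assemble a 'what should I test?' prompt from a unified diff."""
    files = ", ".join(changed_files(diff))
    return (
        f"This diff touches: {files}.\n"
        "List the branches and edge cases that need test coverage:\n\n"
        + diff
    )
```

The returned string is what you'd paste (or send) to the model; the checklist comes back in its reply.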
Over my career, I've known many software engineers who were laid off. The ones I kept track of:
1. got another job, sometimes in another field
2. started their own business
3. retired
4. became a consultant
Does it? ROI stays generally about the same.
The reason is simple: highly profitable businesses attract competition, causing their prices to fall.
Capital also has a much easier time organizing politically than the poor masses, so over time the government will also introduce policies that benefit capital over labor, as we can very clearly see.
The farmer owns the farm and benefits directly from the improvement in operating margins. A software engineer does not own the farm; only the owner would benefit from improved productivity. In this hypothetical the engineer is effectively paid less per unit of output, given that they're more productive.
I would be shocked if more than a tenth of a percent of people who write code for a living work in startups, period.
And maybe it happens to software engineers next. So what? The economy looks completely different today than it did 50 years ago, which was completely different than 50 years before that, and that shouldn't stop just because some people feel childishly entitled to do the same work for their whole lives even if it is obsolete. I'll just change careers like I have done twice before. There's a massive shortage of electrical/plumbing/HVAC contractors. There's a massive shortage of nurses and doctors that will only get worse as the population gets older. Not as cushy as my mid-six-figure tech job, but I have no god-given right to that. And there's plenty more opportunity beyond that for anyone willing to take it, so if any other engineers want to cry about it, their tears will be wasted on me.
Like you said, owning farmland wasn't as common as everyone assumes. The number of farming analogies in which everyone imagines the farmer as operating a lucrative business empire on land they own is a testament to how much people manufacture historical narratives to fit their preferred conclusions.
Productivity really is good for everyone. It’s the reason quality of life has improved dramatically in the past 50 years despite real wages being stagnant to declining.
During WW2, the Japanese would take several weeks to make an airfield, using a large labor force with picks and shovels.
They were horrified when the Seabees showed up with bulldozers, making forward airbases operational within hours.
I grew up in a family with a lot of farmers and I can tell you this is not universally true.
It's very common for farmers to lease their farmland or fields.
I can also say we are all much better off with farming automated at a large scale. Farming jobs were brutally difficult in the past.
Instead, AI is treated as the boogeyman and progress is treated as the devil. These events should be an impetus toward class consciousness, but instead the hate is directed at the token-producing machine. I wonder how much of this redirection is a deliberate psyop.
https://en.m.wikipedia.org/wiki/Lump_of_labour_fallacy
HN is often economically illiterate.
But robots need no compensation and no health insurance; they have no rights, can work 24/7, cannot sue the company (they have no legal representation), cannot steal from the company, can suffer a "fatal" accident with minimal liability to the company, can have their memory of any liability erased, and will always say yes to everything. Training them (for a human: pre-K, kindergarten, elementary school, middle school, high school, college, graduate school, etc.) is just transferring a file or receiving commands wirelessly from a central computer. They don't get bored and quit, and they don't get unhappy and start looking for other jobs. Good luck competing against that.
Once one ML model automates a job, you can make a million copies of that model file and that job is gone. Similar to the Borg: one adapts, all adapt. Once robots start building robots, the number of robots grows exponentially and they quickly take over.
A human needs protection from temperature, noise, injuries, exposure to chemicals, exposure to biohazard, protection from other workers, from management, from animals, from their work equipment and gear, and... from robots! Every aspect of a human is protected including their privacy, biometrics, etc. On the other hand, robots require protection from: nothing. They are completely expendable and their cost will be driven down over time.
If you make a robot even 33% as productive as a human per hour, it still matches a human's daily output, because it can work 24 hours instead of 8.
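A quick sanity check of that arithmetic, taking the 33% figure and an 8-hour human shift at face value:

```python
# One human working an 8-hour shift at productivity 1.0 per hour,
# vs. a robot at 33% of that productivity working around the clock.
human_output = 1.0 * 8     # 8 units per day
robot_output = 0.33 * 24   # 7.92 units per day: essentially the same
```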
Society in 2025 is a pyramid scheme based on scarcity, and now the base of the pyramid will be taken over by robots. The people left outside the system will be the government's problem. But unlike in the industrial age, large numbers of people will no longer be a key to power, so their political representation and rights will evaporate. There will be no universal basic income (that would result in humans multiplying geometrically and eating up all the resources). Robots will be the key to power, so whoever controls the robots will control the government: oligarchs, or superhuman AI. And then at least one oligarch will give full autonomy to the AI to prevail against the other oligarchs (humans are bottlenecks), and in doing so, the one who rules everything will be superhuman AI.
You may think that people in the future will have cheap access to AGI and therefore be able to scrape out a living somehow. The problem is that once large AI conglomerates get all the data they need, access to cheap AI will be cut off, and all the deskilled masses will be left with is endless brainrot social media feeds and nothing to eat.
Nowadays we have a completely different economy to begin with, saturated with "bullshit jobs" (Graeber) already.
The number of new inventions is finite. In the 1990s we had the TGV, Concorde, and maglev trains. Perhaps physical inventions have been somewhat exhausted?
What invention is on the horizon that will provide new jobs for those laid off from "bullshit jobs"?
How did society support Einstein when he discovered relativity? It didn't: Despite the invention of the tractor in 1892, which, according to HN commentators, should have provided him with a carefree life, he had to take a job in the patent office. Which, according to "AI" fanatics, would now be automated by "AI".
> he had to take a job in the patent office
Do you have any idea how much a patent clerk makes?
I vividly remember hearing this from a group of elderly adults when I was around maybe 8 years old.
I remember being very sad that I missed it: The good times of the economy, the period where all the cool things were invented, the “good times” that were being ruined by all of this new technology that the elderly people thought were against the natural way of how things should be.
And it’s always wild to me that I see the same fallacies repeated on HN, inevitably by people who are also convinced that the good times are about to end and all of this new-fangled technology stuff is evil because they just figured out how the world worked in their middle age years and now it’s all changing in ways they find scary.
I mean, how much higher is quality of life now compared to 1970? Like... a teensy bit. That's not true from 1915 to 1970 though.
In some ways it's gotten worse. Yes we have these cool phones, but now you're chained to constant communication all the time. Life was a lot simpler and less distracting when your phone was on your wall.
For example, even with vastly more cars on the road, our air quality is better in most major urban areas. Some of that is political, but a lot of it is technology advances since the 70s.
The cars themselves are faster, safer and more efficient. Again some of that is political policy and a lot of that is technology.
You can now light your whole home for the same amount of energy that you would have used in the 1970s to light a single overhead fixture in a room. All on the back of technology that didn’t exist until the 90s.
For a mere $50 a month you can talk to anyone anywhere in the world in high quality audio and video for an infinite amount of time. In the 1970s just calling the next city over was a cost you had to worry about incurring, let alone calling someone half the world away.
Almost all of the vastness of human knowledge is available for free or nearly free online, expanding your reach beyond what your local library has in stock.
Yes there is a lot of junk out there. But there always has been a lot of junk. There have always been snake oil salesmen and scammers. There are more of them with new scams enabled by technology sure, but again that’s been true of all time. I don’t see evidence that we are uniquely overwhelmed with garbage relative to the benefits we’re accruing.
And there's a lot of recent-ish developments that were considered impossible or hadn't even been conceived of. In medicine we've made HIV prevention/treatment with a pretty much 100% success rate, Hepatitis C antivirals, MS treatments with hopeful prognoses, a huge swathe of cancer treatments that make cancer overall more survivable than not, and mRNA vaccines are incredibly promising. Just personally, Vyvanse is an indescribably more effective and pleasant experience than Ritalin. MRIs are also pretty much magic. They're more or less the holy grail of imaging. The only issue is that they're still expensive, but we're working hard on near-room-temperature superconductors.
In clean power, heat pumps have become obscenely efficient, and solar panels are both very effective and very cheap, pairing well with batteries. Induction stoves are also getting quite good.
Drones have benefited immensely from more efficient computing and better imaging, which isn't necessarily all for the best but is something that could absolutely not have been done in 1970.
What else, what's cool but also improves day to day life? Well, modern elevators are dramatically faster. I, personally, enjoy having wheels on my suitcase, and the modern omni-directional ones are a hugely better experience than the first versions. Oh, E-bikes! Those are really really cool. I guess that's just batteries again?
I'm sure there's a ton of stuff that didn't even occur to me because I'm blind to it, and to be entirely honest, didn't experience the year 1970.
Please edit out swipes like this from comments on HN.
>While many workers fear that automation or artificial intelligence will take their jobs, history has shown that when jobs in some sectors disappear, jobs in new sectors are created. One example is the United States, where a century of increasing productivity and technological improvements changed the percentage of Americans employed in the production of food from 41% of the workforce in 1900 to 2% in 2000.
The problem with AI is that it tends to automate away skilled jobs, ones that sometimes required many years of study and educational debt to get.
So the net result, with respect to software engineering, which was the context of the discussion here, is not "job creation." The net result is that people (especially the junior ones who just got out of school with a pile of debt) are now forced to compete for jobs they aren't necessarily any more qualified for than people without those degrees.
This applies to other things where AI massively reduces the number of people needed: journalism, art asset creation, etc. These are the kinds of cushy and often fulfilling jobs that are only possible because our grandparents and their parents, and so on, worked backbreaking or mind-numbing jobs to build the kind of economy that would support these kinds of careers. Thanks, AI! My kid might never have to worry about the horrors of creative engineering or artistic careers, freeing up room for the real joy of being a gig economy slave or a factory shift worker!
This is one of the most ridiculous things I’ve heard on here. Perfect example of conspiratorial thinking.
Automation has always resulted in economic growth. This time will be no different.
Our economic understanding is inherently limited to just capitalism. That's all we know, and our theories are constrained within that environment. However, throughout human history there have been a plethora of economies that operated under different rules.
> Automation has always resulted in economic growth
This, while true, does not mean that further automation will result in economic growth.
For example, under a capitalist economy, 100% automation results in the entire economy collapsing, because the economy is predicated on consumerism, which requires employment. In order to sell what you produce, you need people with incomes to buy it.
Seems simple, but it's a huge problem with capitalism that we repeatedly bandaid. Obviously, we can't let people who can't work just rot in the street, because that's unpleasant. So we develop systems like disability, Medicaid, Social Security, education, etc to bolt-on top of capitalism to fill in the gaps.
We are completely and utterly not prepared for extreme automation.
But if economic growth is your only goal then there are plenty of reliable options, like another major war, that are even better for long term economic growth. Whatever it takes to make line go up.
The worst part is that AI is absolutely not going to achieve any real "intelligence" or automate anything truly difficult. It's just going to nibble away at a bunch of formerly desirable careers.
It's going to take some time here before people learn how AI fits into the every day and along the way there's going to be some wild stuff.
I also expect there to be a very very strong addiction or cult-like behavior. Mental health will suffer as well. Very dangerous times.
Are the jobs created good jobs - that pay decently, and that people want to do?
It benefits every single consumer of the software products. They get more features faster, or pay less for the same set of features. That's the entire justification of the free market -- corporations benefit by benefiting consumers with better products via competition.
> How do "AI" tools benefit humanity?
Like literally every single other technological advance that makes things easier or more productive. Nothing special about it in that sense.
Or pay the same, and the employers/shareholders pocket the difference?
> That's the entire justification of the free market -- corporations benefit by benefiting consumers with better products via competition.
That doesn't mean the prices would drop; if every corporation employs the same technology, and they were all at roughly the same level of efficiency before, the relative costs won't change, and neither will the actual prices. The profit margins will just go up across the industry as a whole, which has precedents in history.
Just postulating that the consumer surplus simply has to stay with the consumers doesn't translate to it actually staying with the consumers; companies have researched lots of ways to capture it over the last couple hundred years.
And competition between companies ensures that no, the profit margins in the industry don't go up in the long-term. You can look up corporate profit margins yourself. They can go up briefly at times and they can go down, but there's no long-term trend of profit margins increasing over the decades. Competition is alive and well to ensure that benefits do indeed go to consumers in most cases where there aren't monopolies that need to be regulated.
You do not need a textbook to see that all income of the middle classes goes to rent, health care and education, whereas 40 years ago the middle class could afford genuine Persian rugs. It will get worse.
> [...] he told the Senate something else: Too much regulation would be “disastrous” for America’s AI industry. Perhaps—but it might also be in the best interests of humanity.
Is anyone whose decision on this matters motivated by "the best interests of humanity"?
For that matter, the public being represented by decision-makers also has more pressing concerns -- like economic insecurity, a sense of declining national prestige, and (depending on ideology) fear of the general direction that the government is headed. Risks of AI that the piece mentions -- like innovation requiring water and maybe fossil fuels, or allusions to sci-fi AI superintelligence threat -- aren't high priorities. But the US profiting from AI sounds pretty good.
The article doesn't claim that they are — but maybe that they should be.
And it doesn't look to me like AI will solve economic insecurity, declining national prestige, or governmental ideology.
> But the US profiting from AI sounds pretty good.
It's an open question who exactly in the US will profit.
While an underground bunker might offer some protection against certain AI threats, it's not a guaranteed sanctuary from all potential AI-related dangers.
Here's why:
1. Physical access:
   - Robotics and automation: Advanced AI could control robots capable of breaching or bypassing traditional bunker defenses.
   - Advanced weaponry: AI could potentially develop or deploy weaponry designed to penetrate or neutralize bunkers.
2. Cyberattacks:
   - Networked bunkers: If a bunker is connected to external networks, it could still be vulnerable to AI-launched cyberattacks that disable critical systems.
   - Compromised devices: AI could target devices or systems brought into the bunker, potentially gaining control of, or access to, the bunker's internal network.
3. Information warfare:
   - Propaganda and manipulation: AI could be used to spread misinformation or manipulate bunker occupants through targeted propaganda.
   - Psychological warfare: AI could potentially analyze and exploit the psychological vulnerabilities of individuals within the bunker, undermining their morale or cohesion.
4. AI evolution:
   - Unforeseen capabilities: As AI evolves, it may develop capabilities that are currently impossible to anticipate, making it difficult to predict or prepare for all potential threats.
OpenAI certainly put LLMs on the map... but something isn't right over there. There are some smells.
Don't stop pretending. Keep going!