That and the whitewashing it allows on layoffs from failing or poorly planned businesses.
Human issues as always.
Actually, in my city it's not the ATMs but the apps, which make it possible to do almost everything on the phone, that have significantly reduced the number of bank branches in the last few years. I rarely have to go to the bank, but when I do, I find that another nearby branch has closed and I have to go somewhere even farther.
Poor author, never tried expressive high-level languages with metaprogramming facilities that do not result in boring and repetitive boilerplate.
Meanwhile, nobody is claiming vast productivity gains using AI for Haskell or Lisp or Elixir.
I think there's lots of people like me, it's just that doing real dev work is orthogonal (possibly even opposed) to participating in the AI hype cycle.
This tech is a breakthrough for so many reasons. I’m just not worried about it replacing my job. Like, ever.
Hahah, this guy Gen-Zs.
Whenever there is a massive paradigm shift in technology like we have with AI today, there are absolutely massive, devastating wars because the existing strategic stalemates are broken. Industrialized precision manufacturing? Now we have to figure out who can make the most rifles and machine guns. Industrialized manufacturing of high explosives? Time to have a whole world war about it. Industrialized manufacturing of electronics? Time for another world war.
Industrialized manufacturing of intelligence will certainly lead to a global scale conflict to see if anyone can win formerly unwinnable fights.
Thus the concerns about whether you have a job or not will, in hindsight, seem trivial as we transition to fighting for our very survival.
i.e. a new stalemate in the form of multiple inward-focused countries/blocs
> Industrialized manufacturing of electronics?
Ukraine seems to be exploring this and rewriting military doctrine. The Iranian drones the Russians are using seem to be effective, too. The US has drones, too, and we've discovered that drone bombing is not helpful with insurgencies; we haven't been in any actual wars for a while, though.
> Industrialized manufacturing of intelligence
I don't think we've gotten far enough to discover how/if this is effective. If GP means AI, then we have no idea. If GP means fake news via social media, then we may already be seeing the beginning effects. Both Obama and Trump drew a lot of their support from social media.
Having written this, I think I flatly disagree with GP that technology causes wars because of its power. I think it may enable some wars because of its power differential, but I think a lot is discovered through war. WWI revealed the limitations of industrial warfare, and of chemical weapons. Ukraine is showing what constellations of mini drones (as opposed to the US's solitary maxi-drones) can do, simply because they are outnumbered and forced to get creative.
I think for a lot of people it feels like an inconvenient thing they have to contend with, and many are uncomfortable with rapid change.
Not all of it was like that. I think, oddly enough, it was Tesla, or just Elon Musk, claiming you'd soon be able to take a nap in your car on your morning commute through some sort of Jetsons tube, or that you could let your car earn money on the side while you weren't using it, which might actually be appealing to the average person. But a lot of it felt like self-driving car companies wanted you to feel like they just wanted to disrupt your life and take your things away.
Maybe not the best example? The luddites were skilled weavers that had their livelihoods destroyed by automation. The govt deployed 12,000 troops against the luddites, executed dozens after show trials, and made machine breaking a capital offense.
Is that what you have planned for me?
> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan
I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).
I remember when Covid got out of control in China, a lot of people around me [in NY] had this energy of "so what, it'll never come to us." I'm not saying that they believed that, or had some rational opinion, but they had an emotional energy of "it's no big deal." The emotional response can be much slower than the intellectual response, even if that fuse is already lit and the eventuality is indisputable.
Some people are good at not having that disconnect. They see the internet in 1980 and they know that someday 60 years from now it'll be the majority of shopping, even though 95% of people they talk to don't know what it is and laugh about it.
AI is a little bit in that stage... It's true that most people know what it is, but our emotional response has not caught up to the reality of all of the implications of thinking machines that are gaining 5+ IQ points per year.
We should be starting to write the laws now.
If we started writing lots of laws around NFTs, it would just be a bunch of pointless (at best), or actively harmful laws.
Nobody cares about NFTs today, but there were genuinely good ideas about how they’d change commerce being spouted by a small group of people.
People can say “this is the future” while most people dismiss them, and honestly the people predicting tectonic shifts are usually wrong.
I don’t think that the current LLM craze is headed for the same destiny as NFTs, but I don’t think that the “LLM is the new world order” crowd is necessarily more likely to be correct just because they’re visionaries.
So if AI improves a bit, it might be better than the current customer service workers in some ways...
The customer service reps are warm bodies for sensitive customers to yell at until they tire themselves out.
Tolerating your verbal abuse is the job.
As if Amazon ever intended to improve the quality of the service being offered.
You're not going to unsubscribe, and if you did they wouldn't miss you.
This is where the misrepresentation... no, the lie comes in. It always does in these "sensible middle" posts! The genre requires flattening both sides into dumber versions of themselves to keep the author positioned between two caricatures. Supremely done, OP.
If you read Matt's original article[0] you see he was saying something very different. Not "AI is going to kill lots of people" but that we're at the point on an exponential curve where correct modeling looks indistinguishable from paranoia to anyone reasoning from base rates of normal experience. The analogy is about the epistemic position of observers, not about body counts.
The AI bros desperately need everyone to believe this is the future. But the data just isn’t there to support it. More and more companies are coming out saying AI was good to have, but the mass productivity gains just aren’t there.
A bunch of companies used AI as an excuse to do mass layoffs only to then have to admit this was basically just standard restructuring and house cleaning (eg Amazon).
There's so much focus on white collar jobs in the US, but these have already been automated and offshored to death. What's there now is truly survival of the fittest. Anything that's highly predictable, routine, and fits recurring patterns (i.e. what AI is actually good at) was long since offshored to places like India. To the extent that AI does cause mass disruption to jobs, the India tech and BPO sectors would be ground zero… not white collar jobs in the US.
The AI bros are in a fight for their careers and the signal is increasingly pointing to the most vulnerable roles out there at the moment being all those tangentially tacked onto the AI hype cycle. If real measurable value doesn’t show up very soon (likely before year end) the whole party will come crashing down hard.
There isn't gonna be a huge event in the public markets, though, except for Nvidia, Oracle, and maybe MSFT. Private firms will suffer enormously.
It's very useful as a coding autocomplete. It provides a fast way to connect multiple disparate search criteria in one query.
It also has caused massive price hikes for computer components, negatively impacted the environment, and most importantly, subtly destroys people's ability to understand.
We've solved this problem before.
You have 2 separate segments:
1. Lessons that forbid AI
2. Lessons that embrace AI
This doesn't seem that difficult to solve. You handle it like how you handle calculators and digital dictionaries in universities.
Moving forward, people who know fundamentals and AI will be more productive. The universities should just teach both.
it was easy to force kids to learn multiplication tables in their head when there were in-person tests and pencil-and-paper worksheets. if everything happens through a computer interface... the calculator is right there. how do you convince them that it's important to learn to not use it?
if we want to enforce non-ai lessons, i think we need to make sure we embrace more old-school methods like oral exams and essays being written in blue books.
And relying on your government to do the right thing as of 2026 is, frankly, not a great idea.
We need to think hard ourselves how to adapt. Perhaps "jobs" will be the thing of the past, and governments will probably not manage to rule over it. What will be the new power structures? How do we gain a place there? What will replace the governments as the organizing force?
I am thinking about this every day.
This is not the point the author was making, but I think this phrase implies that it's merely fear of change which is the problem. Change can bring about real problems and real consequences whether or not we welcome it with open arms.
I honestly believe everything will be normalized. A genius with the same model as me will be more productive than I am, and I will be more productive than some other people, exactly the same as without AI.
If AI starts doing things beyond what you can understand, control and own, it stops being useful, the extra capacity is wasted capacity, and there are diminishing returns for ever growing investment needs. The margins fall off a cliff (and they're already negative), and the only economic improvement will come from Moore's Law in terms of power needed to generate stuff.
The nature of the work will change, you'll manage agents and what not, I'm not a crystal ball, but you'll still have to dive into the details to fix what AI can't, and if you can't, you're stuck.
Anthropic’s Dario Amodei deserves a special mention here. He paints the grimmest possible future, so that when/if things go sideways, he can point back and say, "Hey, I warned you. I did my part."
Probably there is a psychological term that explains this phenomenon, I asked ChatGPT and it said it could be considered "anticipatory blame-shifting" or "moral licensing".
So feelings have soured and tech seems more dystopian. Any new disruptive technology is bound to be looked upon with greater baseline cynicism, no matter how magical. That's just baked in now, I think.
When it comes to AI, many people are experiencing all the negative externalities first, in the form of scams, slop, plagiarism, fake content - before they experience it as a useful tool.
So it's just making many people's lives slightly worse from the outset, at least for now
Add all that on top of the issues the OP raises and you can see why so many have bad feelings about it.
One suggestion for your writing style: it was already clear that you don't hate AI; you didn't have to mention that so many times in your story.
mjr00•47m ago
> I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.
They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.
bubblewand•41m ago
“Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out)
verdverm•33m ago
Many users don't want to acknowledge this about the company making their fav ai
slowmovintarget•32m ago
They're trying to get government to hand them a moat. Spoilers... There's no moat.
dgxyz•32m ago
None of our much-promoted AI initiatives have resulted in any ROI. In fact they have cost a pile of cash so far and delivered nothing.
noosphr•22m ago
Productivity gains won't show up on economic data and companies trying to automate everything will fail.
But the average office worker will end up with a much more pleasant job and will need to know how to use the models, just like they needed to learn to use a PC.
co_king_5•21m ago
Everything Will Change.
surgical_fire•3m ago
Back then, whenever there was a thread discussing the merits of Crypto, there would be people speaking of the certainty that it was the future and fiat currency was on its way out.
It's the same shit with AI. In part it's why I am tranquil about it. The disconnect between what AI shills say and the reality of using it on a daily basis tells me what I need to know.
mjr00•31m ago
> To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.
It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."
My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes, you just don't hear them because that's not how the social media world works.
surgical_fire•19m ago
They are fine, moderately useful here and there in terms of speeding up some of my tasks.
I wouldn't pay much more than 20 bucks for it though.
dgxyz•18m ago
Yes you can indeed vibe code a startup. But try building on that or doing anything relatively complicated and you're up shit creek. There's literally no one out there doing that in the influencer-sphere. It's all about the initial cut and MVP of a project, not the ongoing story.
The next failure is replacing a 20 year old legacy subsystem with 3MLOC with a new React / microservices thing. This has been sold to the directors as something we can do in 3 months with Claude. Project failure number three.
The only reality is no one learns or is accountable for their mistakes.
LouisSayers•12m ago
This is an opportunity. You can have a good long career consulting/contracting for these types of companies.
dgxyz•9m ago
Emergency clean up work is ridiculous money!
eckesicle•6m ago
AI has led us into a deep spaghetti hole in one product where it was allowed free rein. But when applied to localised contexts, roughly a class at a time, it's really excellent and productivity explodes.
I mostly use it to type out implementations of individual methods after it has suggested interfaces that I modify by hand. Then it writes the tests for me too very quickly.
As soon as you let it do more though, it will invariably tie itself into a knot, all the while confidently asserting that it knows what it's doing.
DrewADesign•4m ago
I reckon the reason the VC rhetoric has reached running-hair-dye-Giuliani-speech level absurdity isn’t because they’re trying to convince other people— it’s because they’re trying to convince themselves. I’d think it was funny as hell if my IRA wasn’t on the line.
crystal_revenge•8m ago
This is a straw man. I don't know anybody who sincerely claims this, even online. However if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.
AI is a superior solution to the problem Stack Overflow attempted to solve, and really great at quickly building bespoke, but fragile, tools for some niche problem you solve. However, I have yet to see a single instance of it being used to sustainably maintain a product code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect PRs that are largely AI generated.
crystal_revenge•14m ago
There absolutely is but I'm increasingly realizing that it's futile to fight it.
The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.
Even if you restrict yourself to small, open models, there is so much unexplored around messing with the internals of these. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself only to an API endpoint, isn't there something more clever we can be doing than re-implementing code that already exists on github badly through vibe coding?
But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.
AstroBen•10m ago
"I shipped code 15% faster with AI this month" doesn't have the pull of a 47 agent setup on a mac mini
thefilmore•3m ago
Any guesses on how long this lasts?
dgxyz•34m ago
There's no way any LLM code generator can replace a moderately complex system at this point and looking at the rate of progress this hasn't improved recently at all. Getting one to reason about a simple part of a business domain is still quite difficult.
NitpickLawyer•20m ago
The rate of progress in the last 3 years has been beyond my expectations. The past year has brought a lot of improvement, and the last 2 months have been insane. No idea how people can say "no improvement".
surgical_fire•10m ago
There was some improvement in terms of the ability of some models to understand and generate code. It's a bit more useful than it was 3 years ago.
I still think that any claims that it can operate at a human level are complete bullshit.
It can speed things up well in some contexts though.
AstroBen•6m ago
Do a small test: if you're 10x faster then keep going. If not, shelve it for a while and maybe try again later
empressplay•36m ago
But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.
monkpit•23m ago
Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.
rep_lodsb•7m ago
"If there's a chance psychic powers are real..."
bpodgursky•36m ago
"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage unilaterally disarming" (not exact, but basically this)
You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.
apaosjns•23m ago
Shumer is of a similar stock but less capable, so he gets caught in his lies.
I’m still shocked people work with Altman knowing his history, but given the Epstein files etc it’s not surprise. Our elite class is entirely rotten.
Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.
parpfish•22m ago
similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.
eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.
the jobs that remain will be harder to do and it will be harder to find people capable or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.
im3w1l•16m ago
If marketing it was the sole objective there are many other stories they could have told, but didn't.
linguae•16m ago
LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.
Frost1x•11m ago
Meanwhile, if you go fishing for niche whales, there's less competition and much higher ROI when they buy. That's why a lot of tech isn't really consumer friendly: it's not really targeting consumers, it's targeting other groups that extract wealth from consumers in other ways. You're selling it to grocery stores because people need to eat, the stores have the revenue to pay you, and they see the value proposition of dynamic pricing on consumers and all sorts of other things. You're marketing it for analyzing civilians' communications to prying governments that want more control. You're selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions, and small industry monopolies exist all around; just find your niche whales to go hunting for.
And right now I'd say a lot of people in tech are happy to implement these things, but at some point it's going to bite you too. You may be helping build dynamic pricing for Kroger because you shop at Aldi, but at some point all of this will affect you as well, because you're also a laboring consumer.