
I guess I kinda get why people hate AI

https://anthony.noided.media/blog/ai/programming/2026/02/14/i-guess-i-kinda-get-why-people-hate-ai.html
98•NM-Super•1h ago

Comments

mjr00•47m ago
> Microsoft’s AI CEO is saying AI is going to take everybody’s job. And Sam Altman is saying that AI will wipe out entire categories of jobs. And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, “kind of under the radar, but about to kill millions of people”.

> I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.

They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.

bubblewand•41m ago
This is also what OpenAI’s “safety” angle was all about.

“Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out)

viccis•39m ago
Anthropic has been the most histrionic about this; their big blog post about making sure their models don't feel like they're being emotionally abused by users is the most fatuous example.
verdverm•33m ago
To me, Anthropic has done enough sketchy things to be on par with the players from Big Tech. They are not some new benevolent corporation backed by SV

Many users don't want to acknowledge this about the company making their fav ai

slowmovintarget•32m ago
"This is obviously why only we can be trusted with operating these models, and require government legislation saying so."

They're trying to get government to hand them a moat. Spoilers... There's no moat.

SoftTalker•5m ago
I was taken aback when I recently noticed a co-worker thanking ChatGPT for its answer.
scrollop•30m ago
GPT-2: "too dangerous to release"
qnleigh•9m ago
Oh funny, I forgot about that. But at the time it didn't seem unreasonable to withhold a model that could so easily write fake news articles. I'm not so sure it wasn't...
AstroBen•39m ago
The marketing is clearly affecting individual developers, too. There's a mass psychosis happening
dgxyz•32m ago
I think this is reality.

None of our much-promoted AI initiatives have resulted in any ROI. In fact they have cost a pile of cash so far and delivered nothing.

noosphr•22m ago
After spending nearly 5 years building software which uses AI agents on the back end I've come to the conclusion it's the PC revolution part 2.

Productivity gains won't show up on economic data and companies trying to automate everything will fail.

But the average office worker will end up with a much more pleasant job and will need to know how to use the models, just like how they needed to learn to use a PC.

Ancalagon•17m ago
Are these botted comments or just sarcasm?
co_king_5•21m ago
After spending nearly 10 years building software which uses AI agents I have to conclude that LLMs are the Industrial Revolution part 2.

Everything Will Change.

Ancalagon•17m ago
Are these botted comments or just sarcasm?
co_king_5•13m ago
Does it matter what I say, or are you going to call me a bot regardless because you're sensitive about my joke?
surgical_fire•3m ago
It's just like in Crypto days.

Back then, whenever there was a thread discussing the merits of Crypto, there would be people speaking of the certainty that it was the future and fiat currency was on its way out.

It's the same shit with AI. In part it's why I am tranquil about it. The disconnect between what AI shills say and the reality of using it on a daily basis tells me what I need to know.

AstroBen•17m ago
thanks for your insightful contribution
molsongolden•13m ago
Many AI initiatives have had massive ROI though. The implementation problems are similar to any pre-AI tech rollout and hugely expensive non-AI tech implementations fail all the time.
dgxyz•10m ago
Name one that has at least $200mn ROI over capital investment. Show me the balance sheet for it as well. And make sure that ROI isn't from suddenly not paying salaries.
mjr00•31m ago
Maybe. I'm actually a big fan of Claude/Codex and use them extensively. The author of the article says the same.

> To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.

It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."

My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes, you just don't hear them because that's not how the social media world works.

surgical_fire•19m ago
I use both Claude and Codex (Claude at work, Codex at home).

They are fine, moderately useful here and there in terms of speeding up some of my tasks.

I wouldn't pay much more than 20 bucks for it though.

dgxyz•18m ago
Well I've just watched two major projects fail which were running mostly on faith because someone read too many "I used 15 AI agents to vibe code..." blog posts and sold it to management. The promoters have a deep technical understanding of our problem domain but little understanding of what an LLM can achieve or what it can understand about the problem at hand.

Yes you can indeed vibe code a startup. But try building on that or doing anything relatively complicated and you're up shit creek. There's literally no one out there doing that in the influencer-sphere. It's all about the initial cut and MVP of a project, not the ongoing story.

The next failure is replacing a 20 year old legacy subsystem with 3MLOC with a new React / microservices thing. This has been sold to the directors as something we can do in 3 months with Claude. Project failure number three.

The only reality is no one learns or is accountable for their mistakes.

LouisSayers•12m ago
> I've just watched two major projects fail

This is an opportunity. You can have a good long career consulting/contracting for these types of companies.

dgxyz•9m ago
Why do you think I work there!

Emergency clean up work is ridiculous money!

eckesicle•6m ago
My experience has been a mixed bag.

AI has led us into a deep spaghetti hole in one product where it was allowed free rein. But when applied to localised contexts, sort of a class at a time, it's really excellent and productivity explodes.

I mostly use it to type out implementations of individual methods after it has suggested interfaces that I modify by hand. Then it writes the tests for me too very quickly.

As soon as you let it do more, though, it will invariably tie itself into a knot, all the while confidently asserting that it knows what it's doing.

dgxyz•4m ago
On localised context stuff, yeah no. I spent a couple of hours rewriting something Claude did terribly a couple of weeks back. Sure it solved the problem, a relatively simple regression analysis, but it was so slow that it crapped out under load. Cue emergency rewrite by hand. 20s latency down to 18ms. Yeah it was that bad.
DrewADesign•4m ago
Rather than making a good product that’s useful to the world, the goal of current startups seems to be milking VCs who are desperately searching for the new version of the mobile phone revolution that will make this all ok… so it seems like they’re accomplishing their goal?

I reckon the reason the VC rhetoric has reached running-hair-dye-Giuliani-speech level absurdity isn’t because they’re trying to convince other people— it’s because they’re trying to convince themselves. I’d think it was funny as hell if my IRA wasn’t on the line.

crystal_revenge•8m ago
> "AI is completely useless."

This is a straw man. I don't know anybody who sincerely claims this, even online. However if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.

AI is a superior solution to the problem Stack Overflow attempted to solve, and really great at quickly building bespoke, but fragile, tools for some niche problem you solve. However, I have yet to see a single instance of it being used to sustainably maintain a product code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect PRs that are largely AI generated.

verdverm•31m ago
Ai psychosis or ai++ psychosis?
co_king_5•23m ago
What is the difference between mass psychosis and a very effective marketing scheme?
crystal_revenge•14m ago
> There's a mass psychosis happening

There absolutely is but I'm increasingly realizing that it's futile to fight it.

The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.

Even if you restrict yourself to small, open models, there is so much unexplored around messing with the internals of these. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself only to an API endpoint, isn't there something more clever we can be doing than re-implementing code that already exists on github badly through vibe coding?

But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.

AstroBen•10m ago
There's a good reason for that. The end result of exploring what they can actually do isn't very exciting or marketable

"I shipped code 15% faster with AI this month" doesn't have the pull of a 47 agent setup on a mac mini

thefilmore•3m ago
> There's a mass psychosis happening

Any guesses on how long this lasts?

brabel•37m ago
Can confirm. We don’t know if AI really is about to make programmers who write code by hand obsolete, but we sure as hell fear our competitors will ship features 10x faster than us. What is the logical next step? Invest lots of money in AI, or keep hoping it’s a fad and risk being left in the dust, even if you think that risk is fairly small?
dgxyz•34m ago
Perhaps stop entering into saturated markets and using AI to try and shortcut your way to the moon?

There's no way any LLM code generator can replace a moderately complex system at this point and looking at the rate of progress this hasn't improved recently at all. Getting one to reason about a simple part of a business domain is still quite difficult.

NitpickLawyer•20m ago
> and looking at the rate of progress this hasn't improved recently at all.

The rate of progress in the last 3 years has been beyond my expectations. It has accelerated over the past year, and the last 2 months have been insane. No idea how people can say "no improvement".

zozbot234•14m ago
"My car is in the driveway, but it's dirty and I need to get it washed. The car wash is 50 meters away, should I drive there or walk?"
votepaunchy•9m ago
Gemini flash tells me to drive: “Unless you have a very long hose or you've invented a way to teleport the dirt off the chassis, you should probably drive. Taking the car ensures it actually gets cleaned, and you won't have to carry heavy buckets of soapy water back and forth across the street.”
dgxyz•7m ago
Beep boop human thinking ... actually I never wash my car. They do it when they service it once every year!
qnleigh•14m ago
Yeah not that long ago, there was concern that we had run out of training data and progress would stall. That did not happen at all.
surgical_fire•10m ago
If your expectations were low, anything would have been over your expectations.

There was some improvement in terms of the ability of some models to understand and generate code. It's a bit more useful than it was 3 years ago.

I still think that any claims that it can operate at a human level are complete bullshit.

It can speed things up well in some contexts though.

sweetheart•14m ago
The recent developments of only the last 3 months have been staggering. I think you should challenge your beliefs on this a little bit. I don't say that as an AI fanboy (if those exist), it's just really, really noticeable how much progress has been made in doing more complex SWE work, especially if you just ask the LLM to implement some basic custom harness engineering.
dgxyz•6m ago
I'll let you know in 12 months when we have been using it for long enough to have another abortion for me to clean up.
AstroBen•6m ago
Why is it an all or nothing decision?

Do a small test: if you're 10x faster then keep going. If not, shelve it for a while and maybe try again later

empressplay•36m ago
It's worse than that. It's ultimately a military technology. The end-game here is to use it offensively and / or defensively against other countries. Whoever establishes dominance first wins. And so you have to push adoption, so that it gets tested and can be iterated. But this isn't about making money (they are losing it like crazy!) This is end-of-the world shit and about whoever will be left standing once all the dominoes fall -- if they ever fall (let's hope they don't!)

But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.

big_paps•27m ago
One often forgets this.
saltcured•24m ago
With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?
monkpit•23m ago
Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.

daze42•16m ago
This 100%. We're in the middle of an AI Manhattan Project and if "we" give up or slow down, another company or country will get AGI before "us" and there's no coming back after that. If there's a chance AGI is possible, it doesn't make sense to let someone else take the lead no matter how dangerous it could be.
rep_lodsb•7m ago
The better analogy would be https://en.wikipedia.org/wiki/Project_Stargate

"If there's a chance psychic powers are real..."

bpodgursky•36m ago
You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently which was basically:

"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage unilaterally disarming" (not exact, but basically this)

You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.

tonyedgecombe•27m ago
Yeah but it’s in his interest to encourage an arms race with China.
testbjjl•24m ago
In China, I wonder if the same narrative is happening: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?
steveklabnik•16m ago
Most reporting I've seen rhymes with this, from last year https://www.theguardian.com/technology/2025/jun/05/english-s...
SlightlyLeftPad•9m ago
They absolutely see the future differently because their society is already set up for success in an AI world. If these predictions come true, free market capitalism will collapse. What would be left?
dv_dt•36m ago
Saying it will take jobs is the marketing line to CEOs - more than you will be left behind.
hmmmmmmmmmmmmmm•28m ago
Except entry level jobs are already getting wiped out.
zozbot234•21m ago
The one entry level job that's been wiped out for good by LLMs is human marketing copywriters, i.e. the people whose job was to come up with the kind of slop LLMs learned from. They're just rebranding as copyeditors now because AI can write the slop itself, or at least its first draft.
apaosjns•23m ago
Sam Altman is a known sociopath who has no problem achieving his goals by any means necessary. His prior business dealings (and repeated patterns with OpenAI) are evidence of this.

Shumer is of a similar stock but less capable, so he gets caught in his lies.

I’m still shocked people work with Altman knowing his history, but given the Epstein files etc. it’s no surprise. Our elite class is entirely rotten.

Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.

verdverm•10m ago
I'm shocked how congratulatory things were for OpenClaw joining Altman Inc
gmerc•3m ago
If you know the author you know it's a match made in heaven
parpfish•22m ago
something i wonder about with AI taking jobs --

similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.

eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.

the jobs that remain will be harder to do and it will be harder to find people capable or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.

zozbot234•18m ago
There's plenty of jobs like this already. They'll want to keep you around even if you're not doing much most of the time, because you can still solve the hard problems as they arise and grow organizational capital in other ways.
im3w1l•16m ago
When trying to infer people's motives, don't just look at what they are doing. Look also at what they aren't doing: alternatives they had and rejected.

If marketing it was the sole objective there are many other stories they could have told, but didn't.

qnleigh•16m ago
Yeah I guess the subtext is 'AI is going to take over so much of the market that it's risky to hold anything else.'
linguae•16m ago
I’m also concerned about the continuing enshittification of software. Even without LLMs, we’ve had to endure slapdash software. Even Apple, which used to be perfectionistic, has slipped. I feel enshittification is a result of a lack of meaningful competition for many software products due to moats such as proprietary file formats and protocols, plus network effects. “Move fast and break things” software development methodologies don’t help.

LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.

SoftTalker•12m ago
So, what I don't get is, taking it to its logical conclusion, if AI takes all the jobs then who are your customers? Who will buy your stock? Who will buy the software that all the developers you used to employ used to write? How do these CEOs and investors see this playing out?
cmiles8•11m ago
You’re not supposed to ask such logical questions. It kills the AI vibe.
Frost1x•11m ago
Tech has slowly been moving that way anyways. In terms of ROI, you’re often much better off targeting whales and large clients than trying to become the ubiquitous market service for consumers. Competition is fierce and people are poor comparatively, so you need the volume for success.

Meanwhile if you go fishing for niche whales, there’s less competition and much higher ROI when they buy. That’s why a lot of tech isn’t really consumer friendly: because it’s not really targeting consumers, it’s targeting other groups that extract wealth from consumers in other ways. You’re selling it to grocery stores because people need to eat and they have the revenue to pay you, and they see the value of dynamic pricing on consumers and all sorts of other things. You’re marketing it for analyzing communications of civilians for prying governments that want more control. You’re selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions and small industry monopolies exist all around. Just find your niche whales to go hunting for.

And right now I’d say a lot of people in tech are happy to implement these things, but at some point it’s going to bite you too. You may be helping build dynamic pricing for Kroger because you shop at Aldi, but at some point all of this will affect you as well, because you’re also a laboring consumer.

dgxyz•46m ago
I hate LLMs because you can solve any problem that LLMs can solve in a much better way but people are too stupid, cheap or lazy to put in the effort to do so and share it with everyone.

That and the whitewashing it allows on layoffs from failing or poorly planned businesses.

Human issues as always.

bananaflag•45m ago
> Being able to easily interact with banks, without waiting in a line that’s too long for the dum-dum you get at the end to be a real consolation, made people use banks more.

Actually, in my city it wasn't the ATMs but the apps, which made it possible to do almost everything on the phone, that significantly reduced the number of bank branches in the last few years. I rarely have to go to the bank, but when I do, I find that yet another nearby branch has closed and I have to go somewhere even farther.

v3xro•43m ago
> If I can somehow hate a machine that has basically stopped me from having to write boring boilerplate code, of course others are going to hate it!

Poor author, never tried expressive high-level languages with metaprogramming facilities that do not result in boring and repetitive boilerplate.

WolfeReader•10m ago
Honestly, this. The mainstream coding culture has spent decades shoehorning stateful OOP into distributed and multithreaded contexts. And now we have huge piles of code, getters and setters and callbacks and hooks and annotation processors and factories and dependency injection all pasted on top of the hottest coding paradigm of the 90's. It's too much to manage, and now we feel like we need AI to understand it all for us.

Meanwhile, nobody is claiming vast productivity gains using AI for Haskell or Lisp or Elixir.

lukev•3m ago
I mean, I find that LLMs are quite good with Lisp (Clojure) and I really like the abstraction levels that it provides. Pure functions and immutable data mean great boundary points and strong guarantees to reason about my programs, even if a large chunk of the boring parts are auto-coded.

I think there's lots of people like me, it's just that doing real dev work is orthogonal (possibly even opposed) to participating in the AI hype cycle.

nativeit•42m ago
The AI executives are marketing it—it’s just none of us are the target demographic. They are marketing it to executives and financiers, the people who construct the machinations to keep their industry churning, and those who begrudge the necessity of labor in all its forms.
lambdasquirrel•35m ago
Yup, if you haven’t heard first-hand (i.e. from the source) at least one story where some exec was at least using AI to intimidate his employees, or outright terminating them in some triumphant way (whether or not this was a sound business decision), then you’ve gotta be living in a bubble. AI might not be the problem but the way it’s being used is.
glimshe•42m ago
I suppose they mean "Why people who hate AI hate AI"... I don't hate AI and know many people who don't either. I find it quite useful but that's it.
KaoruAoiShiho•41m ago
Sam Altman gave millions to Andrew Yang for pushing UBI, so they are trying to forewarn and experiment with finding the right solution. Most of the world prefers to shove their heads in the sand though and call them grifters, so of course we'll do nothing until it's catastrophic.
Der_Einzige•41m ago
The number of em dashes and the use of negation make me think AI wrote part of this. I'll give credit for lack of semicolons, but people are starting to get a bit better at "humanizing" their outputs.
abhaynayar•36m ago
There are em-dashes, but the writing feels nice and unlike the default ChatGPT style, so even if it's AI (which it might not be, since people do use em-dashes), I don't mind.
Nition•11m ago
I'm certainly seeing a huge amount of AI-assisted writing recently (that "I Love Board Games" article posted here yesterday was a good example), but I think this one is human-written. Pangram shows it as human written also.
twodave•41m ago
The reason I dislike AI use in certain modes is because the end result looks like a Happy Meal toy from McDonald's. It looks roughly like the thing you wanted or expected, but on even a casual examination it falls far short. I don’t believe this is something we can overcome with better models. Or, if we can, then what we end up writing as prompts will begin to resemble a programming language. At which point it just isn’t worth what it costs.

This tech is a breakthrough for so many reasons. I’m just not worried about it replacing my job. Like, ever.

abhaynayar•39m ago
> students rob themselves of the opportunity to learn, so they can… I dunno, hit the vape and watch Clavicular get framemogged

Hahah, this guy Gen-Zs.

chung8123•31m ago
Depending on how you use AI you can learn things a lot quicker than before. You can ask it questions, ask it to explain things, etc. Even if the AI is not ready for prime time yet, the vision of being able to change how we learn is there.
mullingitover•39m ago
AI is scary, but look on the bright side:

Whenever there is a massive paradigm shift in technology like we have with AI today, there are absolutely massive, devastating wars because the existing strategic stalemates are broken. Industrialized precision manufacturing? Now we have to figure out who can make the most rifles and machine guns. Industrialized manufacturing of high explosives? Time to have a whole world war about it. Industrialized manufacturing of electronics? Time for another world war.

Industrialized manufacturing of intelligence will certainly lead to a global scale conflict to see if anyone can win formerly unwinnable fights.

Thus the concerns about whether you have a job or not will, in hindsight, seem trivial as we transition to fighting for our very survival.

Havoc•35m ago
To me the global rise of full blown authoritarianism in every corner seems more plausible than a shooting war. The tech is very well suited for controlling people, both in the monitoring sense and in destroying their ability to tell what’s real.

ie new stalemate in the form of multiple inward focused countries/blocs

BlackjackCF•30m ago
That was already happening without LLMs. LLMs will just make it worse.
goda90•21m ago
"We've always been at war with Eurasia"
verdverm•27m ago
Where were the massive devastating wars last time this happened with the internet and mobile phone?
squibonpig•23m ago
Yeah this was my thought as well
tgv•22m ago
You could say that it waged a silent war, and our kids' attention spans lost.
cvwright•17m ago
Very likely they got the causality backwards. Every time there’s a big war, technology advances because governments pour resources into it.
prewett•6m ago
Just for the sake of argument, I don't think the internet and mobile phones are military technologies, nor did GP use those examples.

> Industrialized manufacturing of electronics?

Ukraine seems to be exploring this and rewriting military doctrine. The Iranian drones the Russians are using seem to be effective, too. The US has drones, too, and we've discovered that drone bombing is not helpful with insurgencies; we haven't been in any actual wars for a while, though.

> Industrialized manufacturing of intelligence

I don't think we've gotten far enough to discover how/if this is effective. If GP means AI, then we have no idea. If GP means fake news via social media, then we may already be seeing the beginning effects. Both Obama and Trump drew a lot of their support from social media.

Having written this, I think I flatly disagree with GP that technology causes wars because of its power. I think it may enable some wars because of its power differential, but I think a lot is discovered through war. WWI discovered the limitations of industrial warfare, also of chemical weapons. Ukraine is showing what constellations of mini drones (as opposed to the US' solitary maxi-drones) can do, simply because they are outnumbered and forced to get creative.

mullingitover•3m ago
The internet and mobile phones weren't paradigm shifts for warfare. There were already mobile radios in WWII, so they fall under the 'industrialized manufacturing of electronics' bucket.
atemerev•21m ago
I am absolutely sure that WW3 is inevitable, for these exact reasons. Later, the survivors will be free to reorganize the society.
SoftTalker•2m ago
Nature likes to do occasional resets. Probably explains the Fermi paradox as well.
cdempsey44•38m ago
I have some friends who are embracing it and using it to transform their businesses (eg insurance sales), and others who hate it and think it should be banned (lawyers, white collar).

I think for a lot of people it feels like an inconvenient thing they have to contend with, and many are uncomfortable with rapid change.

writeslowly•36m ago
The vibes around the self-driving car hype (maybe 10 years ago?) felt very similar to me, but on a smaller scale. There was a lot of "You might like driving your car and having a steering wheel, but if you do, you're a luddite who will soon be forced to ride about in our featureless rented robot pods" type of statements, or that one AI scientist who was quoted saying we should just change laws around how humans are allowed to interact with streets to protect the self-driving cars.

Not all of it was like that, I think oddly enough it was Tesla or just Elon Musk claiming you'd soon be able to take a nap in your car on your morning commute through some sort of Jetsons tube or that you could let your car earn money on the side while you weren't using it, which might actually be appealing to the average person. But a lot of it felt like self-driving car companies wanted you to feel like they just wanted to disrupt your life and take your things away.

dmm•34m ago
> The classical cultural example is the Luddites, a social movement that failed so utterly

Maybe not the best example? The luddites were skilled weavers that had their livelihoods destroyed by automation. The govt deployed 12,000 troops against the luddites, executed dozens after show trials, and made machine breaking a capital offense.

Is that what you have planned for me?

drewbeck•4m ago
I caught that too. The piece is otherwise good imo, but "the luddites were wrong" is wrong. In fact, later in the piece the author essentially agrees – the proposals for UBI and other policies that would support workers (or ex-workers) through any AI-driven transition are an acknowledgement that yes, the new machines will destroy people's livelihoods and that, yes, this is bad, and that yes, the industrialists, the government and the people should care. The luddites were making exactly that case.

> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan

I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).

zug_zug•34m ago
Part of what's going on here -- why we have this gap between what we say we fear and how we act -- is just a human deficiency.

I remember when Covid got out of control in China, a lot of people around me [in NY] had this energy of "so what, it'll never come to us." I'm not saying that they believed that, or had some rational opinion, but they had an emotional energy of "it's no big deal." The emotional response can be much slower than the intellectual response, even if that fuse is already lit and the eventuality is indisputable.

Some people are good at not having that disconnect. They see the internet in 1980 and know that someday, 60 years from now, it'll be the majority of shopping, even though 95% of the people they talk to don't know what it is and laugh about it.

AI is a little bit in that stage... It's true that most people know what it is, but our emotional response has not caught up to the reality of all the implications of thinking machines that are gaining 5+ IQ points per year.

We should be starting to write the laws now.

californical•9m ago
But it’s worth being careful - you could’ve said the same thing 3 years ago about NFTs. They were taking off and people made very convincing arguments about how it was the future of concert tickets, and eventually commerce in general.

If we started writing lots of laws around NFTs, it would just be a bunch of pointless (at best), or actively harmful laws.

Nobody cares about NFTs today, but there were genuinely good ideas about how they’d change commerce being spouted by a small group of people.

People can say “this is the future” while most people dismiss them, and honestly the people predicting tectonic shifts are usually wrong.

I don’t think that the current LLM craze is headed for the same destiny as NFTs, but I don’t think that the “LLM is the new world order” crowd is necessarily more likely to be correct just because they’re visionaries.

b8•32m ago
Yeah, it's FUD. AI can't even do customer service jobs well, and CEOs make hyperbolic statements that it will replace 30% of jobs.
FartyMcFarter•30m ago
In my experience dealing with e.g. Amazon Prime Video customer service, the actual people working on customer service can't do those jobs well either. As an example, I've complained multiple times to them about live sports streams getting interrupted and showing an "event over" notice while the event is still happening. It's a struggle to get them to understand the issue let alone to get it fixed. They haven't been helpful a single time.

So if AI improves a bit, it might be better than the current customer service workers in some ways...

co_king_5•14m ago
Amazon isn't interested in giving you a quality experience.

The customer service reps are warm bodies for sensitive customers to yell at until they tire themselves out.

Tolerating your verbal abuse is the job.

Amazon never intended to improve the quality of the service being offered.

You're not going to unsubscribe, and if you did they wouldn't miss you.

ctoth•31m ago
> And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, "kind of under the radar, but about to kill millions of people"

This is where the misrepresentation... no, the lie comes in. It always does in these "sensible middle" posts! The genre requires flattening both sides into dumber versions of themselves to keep the author positioned between two caricatures. Supremely done, OP.

If you read Matt's original article[0] you see he was saying something very different. Not "AI is going to kill lots of people" but that we're at the point on an exponential curve where correct modeling looks indistinguishable from paranoia to anyone reasoning from base rates of normal experience. The analogy is about the epistemic position of observers, not about body counts.

[0]: https://shumer.dev/something-big-is-happening

amelius•31m ago
People hate AI because it does all the fun jobs.
big_paps•26m ago
Haha, funny but not true !
cmiles8•28m ago
The cracks are showing, and all the “AI is going to eliminate 50% of white collar jobs” fear mongering is simply signaling we’re in the final stages before the bubble implosion.

The AI bros desperately need everyone to believe this is the future. But the data just isn’t there to support it. More and more companies are coming out saying AI was good to have, but the mass productivity gains just aren’t there.

A bunch of companies used AI as an excuse to do mass layoffs only to then have to admit this was basically just standard restructuring and house cleaning (eg Amazon).

There's so much focus on white collar jobs in the US, but these have already been automated and offshored to death. What's there now is truly a survival of the fittest. Anything that's highly predictable, routine, and fits recurring patterns (ie what AI is actually good at) was long since offshored to places like India. To the extent that AI does cause mass disruption to jobs, the India tech and BPO sectors would be ground zero… not white collar jobs in the US.

The AI bros are in a fight for their careers and the signal is increasingly pointing to the most vulnerable roles out there at the moment being all those tangentially tacked onto the AI hype cycle. If real measurable value doesn’t show up very soon (likely before year end) the whole party will come crashing down hard.

verdverm•23m ago
My hunch is the year of the AI Bubble is the same one as the Linux Desktop
ass22•17m ago
The OpenClaw stuff, for me, is a prime signal we are now reaching the maximal size of the bubble before it pops - the leaders of the firms at the frontier are lost and have no vision.

There isn't gonna be a huge event in the public markets though, except for Nvidia, Oracle and maybe MSFT. Firms that are private will suffer enormously though.

bovermyer•26m ago
My feelings on AI are complicated.

It's very useful as a coding autocomplete. It provides a fast way to connect multiple disparate search criteria in one query.

It also has caused massive price hikes for computer components, negatively impacted the environment, and most importantly, subtly destroys people's ability to understand.

doctorpangloss•23m ago
Once I read a reference to Clavicular, I realized that the very first thing this author should do is stop reading the NYTimes. If the goal is to experience things closer to reality haha.
ergocoder•23m ago
> I have a friend who is a new TA at a university in California. They’ve had to report several students, every semester, for basically pasting their assignments into ChatGPT.

We've solved this problem before.

You have 2 separate segments:

1. Lessons that forbid AI

2. Lessons that embrace AI

This doesn't seem that difficult to solve. You handle it like how you handle calculators and digital dictionaries in universities.

Moving forward, people who know fundamentals and AI will be more productive. The universities should just teach both.

parpfish•16m ago
this is tough because we've spent years building everything in education to be mediated by computers and technology, and now we're realizing that maybe we went a little overboard and over-fit to "lets do everything on computers".

it was easy to force kids to learn multiplication tables in their head when there were in-person tests and pencil-and-paper worksheets. if everything happens through a computer interface... the calculator is right there. how do you convince them that it's important to learn to not use it?

if we want to enforce non-ai lessons, i think we need to make sure we embrace more old-school methods like oral exams and essays being written in blue books.

testbjjl•23m ago
Is this job obsolescence narrative top of mind in China? I wonder if they are seeing these developments differently.
atemerev•23m ago
Well, the process cannot be stopped or paused, whether we like it or not, for a few relatively obvious reasons.

And relying on your government to do the right thing as of 2026 is, frankly, not a great idea.

We need to think hard ourselves about how to adapt. Perhaps "jobs" will be a thing of the past, and governments will probably not manage to rule over it. What will be the new power structures? How do we gain a place there? What will replace the governments as the organizing force?

I am thinking about this every day.

everdrive•19m ago
"Change is scary,"

This is not the point the author was making, but I think this phrase implies that it's merely fear of change which is the problem. Change can bring about real problems and real consequences whether or not we welcome it with open arms.

ttuominen•12m ago
Luddites weren't against technology. “They just wanted machines that made high-quality goods and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.” https://www.smithsonianmag.com/history/what-the-luddites-rea...
fer•10m ago
Disclaimer: self plug[0]

I honestly believe everything will be normalized. A genius with the same model as me will be more productive than I am, and I will be more productive than some other people, exactly the same as without AI.

If AI starts doing things beyond what you can understand, control and own, it stops being useful, the extra capacity is wasted capacity, and there are diminishing returns for ever growing investment needs. The margins fall off a cliff (and they're already negative), and the only economic improvement will come from Moore's Law in terms of power needed to generate stuff.

The nature of the work will change, you'll manage agents and what not, I'm not a crystal ball, but you'll still have to dive into the details to fix what AI can't, and if you can't, you're stuck.

[0]https://www.fer.xyz/2026/02/llm-equilibrium

avazhi•9m ago
People don’t hate AI because they’re scared of it taking their jobs. They hate it because it’s massively overhyped while simultaneously being shoved down their throats like it’s a panacea. If and when AI, whether in LLM form or something else, actually demonstrates genuine intelligence as opposed to clearly probabilistic babble and cliche nonsense, people will be a lot more excited and open to it. But what we have currently is just dogshit with a few neat tricks to cover up the smell.
hmmmmmmmmmmmmmm•7m ago
this feels like a comment out of 2023. Ever since reasoning models they have become much more than "probabilistic babble".
ludwigvan•9m ago
> The people in charge of AI keep telling me to hate it

Anthropic’s Dario Amodei deserves a special mention here. Paints the grimmest possible future, so that when/if things go sideways, he can point back and say, "Hey, I warned you. I did my part."

Probably there is a psychological term that explains this phenomenon, I asked ChatGPT and it said it could be considered "anticipatory blame-shifting" or "moral licensing".

SmirkingRevenge•7m ago
Almost hard to remember now, but many tech companies used to be well liked - even Facebook, at one time. The negative externalities of social media or smartphones were not apparent right away. But now people live with them daily.

So feelings have soured and tech seems more dystopian. Any new disruptive technology is bound to be looked upon with greater baseline cynicism, no matter how magical. That's just baked in now, I think.

When it comes to AI, many people are experiencing all the negative externalities first, in the form of scams, slop, plagiarism, fake content - before they experience it as a useful tool.

So it's just making many people's lives slightly worse from the outset, at least for now

Add all that on top of the issues the OP raises and you can see why so many have bad feelings about it.

crassus_ed•4m ago
Nice read! The main benefit for me is the reduced search time for anything I need to look up online. Especially for code, you can find relevant information far more quickly.

One improvement for your writing style: it was clear to me that you don't hate AI; you didn't have to mention that so many times in your story.
