There are so many things called "AI" these days that studies like this are basically meaningless. I think (hope) most people's views can't be reduced to a single binary question.
Big companies are diving straight into the mustache-twirling benefits "for the business" and of course people will push back.
AI can help you in the near term and harm you in the long term.
I think the more people use AI the more their view shifts from the former to the latter.
Sure but that has nothing to do with long/short term.
Everything to do with have/have not.
Let's read again.
> 76% of AI experts said AI would benefit them personally, while only 24% of the U.S. public said the same.
Imagine if 76% of financial experts said a higher tax on low earners would benefit them, whilst only 24% of the public said the same.
There is one thing I found to be true over and over again no matter what the anchor point is for the conversation, no matter the context, no matter someone’s sentiment, etc: nobody likes to have their time wasted.
LLMs are incredibly useful for cutting corners, which makes it very easy to waste people's time. No matter how useful they are, no matter the use case you have found, no matter the integration, people keep encountering bad search results and people sending them clearly LLM-generated work that wastes their time.
Unless somebody comes up with a cure for that, there will always be a significant portion of the population that is hostile to LLMs - and rightfully so! No promise of productivity will overcome that.
TL;DR: the biggest problem with LLMs is that they enable people to waste other people's time.
These were always poor proxy metrics for "good content," but in a lot of environments, especially professional ones, they were how work was evaluated. Naturally, others used LLMs to generate content that satisfies these metrics.
The slop epidemic is a consequence of what people erroneously valued for so long. Now they have it, and it's meaningless, and even if most of it was always meaningless, they can't easily tell the difference between "fluff with something meaningful" and "fluff with only fluff" anymore.
Impersonal corporation which has been improving their capability to make you give up in disgust for decades jumping on the AI bandwagon? Check.
Voice recognition system that doesn't? Check.
Dunning-Kruger level responses once you finally get your voice recognized? Check.
#1? They run on an unlimited power source. Human gullibility.
In 20 years the thanksgiving dinner fights over AI equality are going to be wild.
>I'm not a bigot I support trans rights. But clankers aren't welcome in our share house.
>> OK Millennial. I'm a cyborg with 95% of my brain running in a private server.
A cynical part of me says it's something everybody can hate. I can see both sides taking that. I can't see either side embracing it as part of the left or right identity.
Maybe more it's a conflict between those with power and those without. Like return to office, or open offices, or cubicles before that, and probably many other things back to the luddites and earlier.
I want to use AI to do your job.
I don't want someone else to use AI to do my job.
I don't want to spend my attention on AI content that takes more time to consume than create.
> I don't want someone else to use AI to do my job.
This is just hypocrisy quite honestly.
People want to buy a new GPU, add RAM, or get a new SSD or hard drive. All of these have doubled or quadrupled in price in just a few months.
Then there are reddit threads every day where I think 30% of the original posts and comments are AI-generated spam. If I see a post with em-dashes or anything that ends by asking for "thoughts?" I just downvote and report as spam. I want to interact with actual humans, not AI bots.
Then we see posts about AI data centers and electricity use which will lead to higher electric bills for ordinary people if demand is higher than supply.
This is ignoring all the stuff about people losing jobs.
So why should the video game playing population or even the general population be in support of AI? Of course it has uses but there are so many negatives right now it is easy for me to understand why people are already sick of it.
That hasn't really played out in reality. The correlation between datacenter capacity growth and electricity price growth is poor.
https://www.economist.com/content-assets/images/20251101_USC...
The article lists all the obvious and credible reasons why people are opposed to AI in the intro paragraph. It then spends the next 25 paragraphs advancing a very clever pet theory derived from psychology about what might be going on here. While interesting in its own right, the article misses the obvious concerns that it raised in the intro paragraph.
Disagreed. It is an attempt to paint the real reasons for anti-"AI" sentiment as unreasonable, period.
https://hai.stanford.edu/ai-index/2026-ai-index-report/publi...
I think the sentiment is still a valid one, and I think it's an accurate assessment, but you're right that it's probably an AI-"assisted" throwaway.
A small group of people are going to acquire immense wealth and power from this new technology.
80% of everyone else will face the possibility of losing their jobs or having their income reduced, if they still have a job at all.
This will be another Rust Belt, lasting decades, but for white-collar jobs and coastal states.
People are increasingly hostile towards AI because they’re realizing there’s a good chance it turns out to be the most toxic and destructive thing that humanity has ever invented. The creation of modern AI may be looked back on as the worst thing that humanity has ever done — if there’s even enough humanity and enough truth left to reflect.