Oh, wait, you are correct, AI never works. I see it now.
- Potential job loss, particularly in the bottom half or so of jobs.
- Further wealth inequality due to so many factors but primarily because the companies providing these tools will capture the dollars that would’ve been spent on the jobs mentioned above.
- NIMBY-ism. AI = data centers and people are overwhelmingly deciding they don’t want these near their homes. I live in the Midwest and it’s been amazing how much opposition has been showing up for these projects.
Of course, all of these are based on speculation and the “promises” of the tech. Many feel the time to act is now, rather than once it’s too late, on the off chance these things do happen.
"the bottom half" of desk jobs, maybe. But most jobs in "the bottom half" overall are not desk jobs, and therefore aren't going to be replaced with AI anytime soon. Think burger flippers, waiters, and retail clerks.
Some paper pushing asshole working for the government demands some paper bullshit. Some other paper pushing asshole working for bigco produces said paper. Is value actually created? Perhaps there's some risk mitigation but enough to justify their respective wages? And the need to push that paper back and forth locks the little guy out of competing in that market.
Yeah, it'll suck for a lot of people in the interim. But it will also put downward price pressure on a ton of things whose cost makes other value-producing things not worth doing. If legal, design, engineering, etc. services are made cheap in the "boring" cases, then that becomes a competitive advantage for the buyers, which over time trickles down to their buyers and their buyers.
The poll they cite shows this is clearly not unique: https://www.pewresearch.org/global/2025/10/15/concern-and-ex.... The US is (just barely) at the top, but no country is anywhere close to "more excited than concerned", and several countries are basically equal to the US.
I live in America so that's my perspective, but I would be surprised if this article couldn't accurately describe a lot of other countries.
This article almost feels like some kind of psychological manipulation: "Jeez Americans, can't you just get on board like the rest of the world?"
The EU has strong worker protections and a robust social safety net. It’s not surprising to hear they are less antagonistic towards AI.
In separate research at https://www.bloomberg.com/news/articles/2025-06-20/trust-in-... , researchers found that low-income countries have higher trust in AI.
I wonder about China. More generally, do countries deemed collectivist (https://worldpopulationreview.com/country-rankings/collectiv...) and supportive of their government tend to lean towards AI?
> In countries like China (83%), Indonesia (80%), and Thailand (77%), strong majorities see AI products and services as more beneficial than harmful. In contrast, optimism remains far lower in places like Canada (40%), the United States (39%), and the Netherlands (36%).
Even then I can just very easily ignore it
So I’m curious what specific examples you’re thinking about with respect to “forced down your throat”
I can actually see value in AI-generated content for stuff that is already considered low value. Slap an image on your spam post. Hell, just write that spam post. Do that enough times and you might make more than you spend in time and money.
Seems like, on net, there is some value in it for the individuals doing this.
AI is simply not making anyone's life better, so what's there to love exactly?
It may be true that AI is unpopular in the west more generally though, not just America.
I fully expect AI can and will be used to hurt the masses in those ways. But on the other hand, the masses may very well be made substantially wealthier by AI cutting through all the arcane memorization and bureaucratic bullshit that society has levied upon them to keep them down over the past 100 years.
Why do I need a $$$$ dentist to assess my X-rays? Why not a $$ hygienist assisted by AI and checked by the dentist in the odd cases? Now multiply that sort of labor reshuffling by every sector of the economy.
You mean "learning things?"
We're really trying to reframe knowledge and skill as a form of oppression now?
Those days are gone, never to return, and the rules are being rewritten to exclude the middle.
Amazing how things that are often largely rackets or intentional bureaucratic barriers can be framed as "learning," and people like you will go to bat for them.
It's not the learning that's the oppression. It's the going through the motions clerical bullshit that makes all sorts of value producing things not worth doing.
I want a garage with a taller than 9ft wall. But I can't have one because that requires engineered drawings that cost enough to be a non-starter because of the bullshit motions that must be gone through. So what do I do, I build a 9ft wall and I put trusses on top that give me more headroom. Everyone is worse off. Me, the concrete people, the engineer who didn't get my business, the enforcer bureaucrat who has less work to justify their job, the truss people who sell me cheaper trusses because I don't need a 2nd floor now.
You see how replacing the clerical labor the engineering firm would task with such a project with $20 of API calls would benefit literally everyone involved here?
AI is seemingly poised to make repetitive paper-pushing administrative bullshit and access to clerical work outputs cheaper and more accessible. This benefits us all in the same way that cheaper access to computation benefitted wide swaths of the economy when computers became dominant, or cheaper access to communications benefitted everyone when the internet became commonly used, or the way a reduction in energy prices is felt across the entire economy.
The only real "losers" here (assuming it doesn't happen lightning fast) are those whose only work output is such clerical tasks, because instead of a force multiplier for them it's a replacement.
You seem to be on some kind of tilt about having to deal with local building codes about raising your garage ceiling, which is an entirely different thing. Many people are able to deal with that just fine. Maybe AI would make dealing with local bureaucracy easier, maybe it wouldn't? You're probably still going to need to fill out a form to raise your garage ceiling or whatever.
This comment reeks of "640k ought to be enough for anybody".
>local building codes
Because anything to do with land and building is an example of a highly bureaucratic and expensive process. The process doesn't care who writes the stuff as long as the numbers check out. I can't do it (well enough) and I can't justify the expense. I'm not asking for bespoke work, just cookie-cutter stuff. So if the cost were to come down....
>Maybe AI would make dealing with local bureaucracy easier, maybe it wouldn't?
The bureaucracy isn't the hard part. They are like a shitty vending machine: they need the right amount of inputs for a given output, and the inputs need to be good quality, no wrinkled bills. Justifying the juice for the squeeze is the hard part. I literally cannot stamp drawings no matter how much I learn, and no engineer can stamp my drawings without essentially recomputing everything. Whatever situation one cares to look at, there's always someone at a margin like this.
Think about it from the engineer's perspective. He's not some huge national firm that can afford bespoke software to tie his existing software together and make his employees lightning fast at what they do. He needs to pay someone to do all the "work" of plugging things in, choosing what sort of calculations to run, etc. He can't bid me a price I can justify. Maybe we'll get to a point where an AI inbox assistant can reformat and shovel inputs to his team so that they can work faster, and therefore cheaper, and scoop up all the potential work like mine.
Now replace the engineer with something else. Maybe you do tolerance studies for manufacturing. If you can make your stuff cheaper more people can afford to benefit from your stuff.
And so on and so on for every comparable situation (someone wants to do something, but can't justify the cost of the desk work they'd have to buy along the way) in the whole economy. There's a TON of potential upside. The cost of flooding the world with low quality written content, scam chatbots, AI porn and everything else seems low in comparison to me.
But again, we pretty much accomplished all the major goals of evolution; now we're just weaponizing it for enjoyment, pleasure, and entertainment to keep ourselves from being bored. As for the rest of us, it just seems like the natural continuation of tickling the curiosity in our brains. As depressing, dystopian, and heartbreaking as it sounds, we are currently heading towards accomplishing the final innovation we will produce as a human race: creating something superior to our biological lifeforms.
Maybe uploading ourselves into a super computer isn't as sci-fi as we thought since it seems living as a normal human will become extremely difficult.
My apartment complex recently switched to an "AI assistant" that replaced the front office person. I haven't heard a single positive thing about it from anybody. It's utterly terrible, and they're absolutely not "passing the savings on" to residents by lowering rents.
Even with "vibe coding", we accept that it does a shit job, because it does an "okay enough" shit job that it's often usable, but nobody wants to maintain "vibe code".
The whiny petulant owner class punches down on the commoners again. To those so privileged: I prefer my privacy and sanity to that of your goofy futuristic fever dreams. Please knock it off. If you want me to build a statue in your honor then fix healthcare.
But now AI is threatening (promising?) to make those jobs go away, and the same folks are pissed.
If you wanted people to get on board with this, there should probably be some sort of UBI/expansive social safety net in place, because it turns out that if people have to choose between unfulfilling drudgery and not affording food… people take the drudgery.
AI pushers are promising to take other jobs away, not the bullshit ones.
We're going to build a massive data center in your town. In one day, it will use as much electricity as you use in 10 years, and will produce more written words than you could write in 10 lifetimes. Its main purpose will be to eliminate your job, but it will have other uses, like generating images of your daughter in a bikini.
We do this in the hopes that it makes me (not you) very rich. Sounds good? Just kidding, we're not asking you!
Most genAI has been laughably poor at doing what it’s advertised at doing for the average person. People didn’t ask and don’t need a shoddy summary of their text notifications and they don’t want AI to take away their creative hobbies.
It's fine if genAI looks like the Palm Pilot today. Nothing says it will stay that way.
We saw rapid improvements in image and video generation but that’s actually proven to be super threatening to people, if not just embarrassing (see the Star Wars alien animal tech demo).
After three years of this, most genAI is crap, it has made most services worse and people very understandably don’t like it.
Where is the Siri that actually does what Apple announced back in 2024?
Television, for example, had many FCC regulations at its inception to ensure it served the public interest. This of course devolved over time into nominal compliance, like showing community bulletins at 5am when no one was watching.
You might be somewhat correct with the release of the Internet upon the public in the early 90's, but imagine if common carrier rules were not in effect for the phone lines everyone was using to access the Internet back then. The phone companies would have loved to collect the per-minute charges AOL initially was doing before they went to unlimited. They already had a data solution in place - ISDN - but it was substantially more expensive from what I understand and targeted to business only.
With AI, it's the complete opposite, everything is full steam ahead and the government seems to be giving it its full blessing.
The public benefit here is that all sorts of "compliance" is made cheaper. I can see it already in the construction industry. Stuff you used to hire a firm for, you now use cheap labor for; they use AI, you have your "one old guy whose engineering license is kept up to date" check it, it gets some tweaks, then it passes his scrutiny. He submits it. Town approves it because it's legitimately right. High fives all around; three people just did something that used to take a much bigger team. The engineer would have had to decline that job before. The contractor too.
Of course, this all comes at the expense of whoever benefitted from having that barrier there in the first place.
The boss is lying: it's because Trump has caused a severe recession and your boss is seeing revenue drop, not because AI is truly capable of replacing people. The boss simply wants to fire people due to the recession WITHOUT signalling to shareholders that the next quarterly report is going to be ... unpleasant. But that's not what people hear.
What I mean is this is the financial "good news":
In better hands the technology would probably make the world a better place. But not in the hands of silicon valley billionaires.
The bucket they're referring to is labeled "more concerned than excited." It seems like rounding that to "hate" (in the headline) is misleading?
People can be "more concerned than excited" about the future while still using ChatGPT (or Claude Code) a lot. Even much of the management and workers at top AI labs could be put in the "more concerned than excited" bucket.
Maybe the headline should be "Why are Americans more concerned than excited about AI?"
Agreed, the NYT decided to prioritize drama over accuracy here. :(
Zuck can get an audience with the President, who can basically override any so-called independent agencies' determinations. Congress is neutered and SCOTUS seems to enable this behavior. All while our power bills go up and we fuel data centers by polluting our environment.
So I wonder if it’s not just AI, it’s AI with seemingly no recourse from the public to check capitalist excesses.
As a citizen, who doesn’t trust the government, or the media, or giant corporations, I’m also an 8/10 concerned.
That means I’m equally concerned about AI as I am excited. I might be more concerned than someone who isn’t excited at all about AI (1/10) but is only mildly concerned (5/10).
The problems with AI aren't technical, they are political and economic. This topic is discussed in Max Tegmark's "Life 3.0", in which he theorises about various outcomes if we do invent AGI. He describes one possibility where we move to a post-scarcity society and people spend their days doing art and whatever else they fancy. Another option looks more like the world described in Elysium. I suspect the latter prediction feels more likely to most people.
Additionally, Americans are very technologically astute in comparison to other countries; we were early adopters of AI. I think at this point it's been proven that AI is underperforming all expectations, and thus we are starting to resent the sentiment that more and more of our economy needs to be directed towards this technology that still remains a pipe dream rather than a reality.
you should expect fewer and fewer opportunities for smaller organizations to counter the power of larger organizations
So what are you going to functionally do about it?
What specifically are you going to do to invert that power dynamic?
And could you possibly also use AI to help invert that power dynamic?
After I get the quote, I call back and get a phone AI assistant instead to handle scheduling the job instead of the human secretary. The AI assistant does not understand my home address and keeps asking me over and over to repeat it, it was the most Black Mirror experience I've had in years. There was no option to speak to a person, so I hung up without getting the job scheduled. I wrote a bad Google review detailing my experience.
I called a different plumbing company (with a human on the other end) and got a second quote that was 30% less than the original, they came out and did the job.
Three weeks later, the owner of the first company reaches out making excuses as to why he wasn't flagged about my issue sooner and apologizing, but the job was done by that point.
I'd rather pay extra for services like plumbing if it means I can talk to a human because these LLM voice systems can't do basic scheduling calls. If they go off the rails, the customer is totally hosed. I don't wanna hear "oh it'll get better". The future we're headed towards is banal and dystopian beyond comprehension.
> People in many other developed democracies — Japan, Israel, Sweden, South Korea — had warm views of social media in a 2022 survey
I guess they cite that as some kind of jab at Americans ("look how crazy and backwards they are"), but, I am sorry, I don't see warm views of ad-fueled tech megacorps sucking people's attention and harvesting clicks as a positive thing. If anything, someone else could turn right around and reword the article as "look at these other countries blindly trusting American tech corps with their privacy and attention".
I love the tech, and despise the people pushing it and how they want to use it. What should be used to free us from some work and allow us to focus more on human things (family, arts) and science, instead will be used to further divide and subjugate us, all in the interest of shifting more wealth and power from the working class up to those who have both in abundance.
1) Trump's tax and spending policy (last term) has caused a severe recession
2) Trump's recession is causing people to fire workers, BUT CEOs and ... don't want to admit it is because revenue is dropping and about to drop more, i.e. because management is close to being forced to report a total disaster of a quarterly report. So no, you're fired "because AI can do your job" (meanwhile, actual demonstrations of AI actually doing a single minimum-wage job ... even OpenAI "for some reason" doesn't demonstrate that)
What people don't seem to get is that AI's history of overpromising and underdelivering is about 3x as long as the one for Nuclear Fusion.
Lawyers use it to draft legal briefs.
Court sanctions lawyers for fake citations generated by AI.
https://natlawreview.com/article/court-sanctions-attorneys-s...
AI is inherently unreliable and untrustworthy. In this case, distrust is not just some emotional reaction but pretty well grounded in fact.
Once lawyers learn this themselves from experience, I expect they will move toward legally impressing this upon any who are slow/reluctant to admit as much.
Using technology that is widely known to be flawed for any sort of serious work is a textbook example of "negligence".
big statement that doesn’t hold up under any technical scrutiny. “AI” -- neural networks -- are used reliably in production all over the place: signals filtering/analysis, anomaly detection, background blurring, medical devices, and more
assuming you mean LLMs, this still doesn’t hold up. it depends on the system around it. naively asking ChatGPT to construct a legal brief is a stupid use of the tool. constructing a system that can reliably query over and point you to relevant data from known databases is not
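a minimal sketch of what I mean by “a system around it” (pure Python; the database stub and every name here are hypothetical, not any real product or API):

```python
# Hypothetical sketch: the model may only cite sources pulled from a known
# database, and any draft citing a source we never supplied gets rejected.
import re
from dataclasses import dataclass


@dataclass
class Case:
    citation: str  # e.g. "123 F.3d 456"
    summary: str


def search_known_database(query: str) -> list[Case]:
    """Stand-in for a query against a real, curated legal database."""
    raise NotImplementedError("hook up to your actual data source")


def build_prompt(question: str, cases: list[Case]) -> str:
    # The LLM never invents the citation list; it only sees records we retrieved.
    sources = "\n".join(f"[{i}] {c.citation}: {c.summary}" for i, c in enumerate(cases))
    return (
        "Answer using ONLY the numbered sources below, citing them as [n].\n"
        f"{sources}\n\nQuestion: {question}"
    )


def citations_are_grounded(answer: str, cases: list[Case]) -> bool:
    """Reject any draft that references a source index we never provided."""
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    return cited.issubset(range(len(cases)))
```

point being: the model can only summarize and point at records that actually exist, a check rejects anything outside that set, and a human still reviews the draft before it goes anywhere near a court.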
My approach is that you're responsible for anything you ship, and I don't care (within reason) how you generated it. Once it hits production, it's yours, and if it has any flaws, I don't want to hear "Well the AI just missed it or hallucinated." I don't fucking care. You shipped it, it's _your_ mistake now.
I use Claude Code constantly. It's a great tool. But you have to review the output, make necessary adjustments, and be willing to put your name on it.
If it's bad: Human mistake.
If it's good: It's a great tool.
People are going to love that!