A more immediate notion, perhaps, but definitely not scarier than human extinction.
Anyway, in terms of cultural change, I think the emerging image and video models will be a lot more disruptive. Text has been easy to fake for a while now, and barely gets people's attention anymore.
If we plot all of these on a scale of how much they impacted the day-to-day experience of an average user, there is something highly unusual about AI. The slop is everywhere; every single person who interacts with digital media is affected. I don't really know what this means, but it's pretty unusual compared with other fads.
Current humans can't even deal with the very simple and obvious issue of global warming, so it seems very unreasonable to expect any effective dealing with significantly more complex issues. If not evolution, then at least very accelerated adaptation is in order.
0: https://deviantabstraction.com/2025/09/29/against-the-tech-i...
This reminds me of when everyone was saying that "everything on the internet is written in ink" - especially during the height of social media in the 2010s. So imagine my surprise in the first half of the 2020s when tons of content started getting effectively deleted from the internet - either through actual deletion or things like link rot. Heck, I literally just said "the height of social media" - even that has pulled back.
So yeah, remember that tech ultimately serves people. And it only happens so long as people are willing to enable it.
> I find my fear to be kind of an ironic twist on what the Matrix foresaw—the AI apocalypse we really should be worried about is one in which humans live in the real world, but with thoughts and feelings generated solely by machines. The images we see, the words we read, all generated with the intent to control us. With improved VR, the next step (out of the real world) doesn’t seem very far away, either.
The return to that world will be very painful and chaotic however.
I think a large portion of the population actively distrusts experts.
It’s always been this way. That you thought otherwise is just evidence of how good a central power was at controlling “the truth”.
Trust doesn’t scale. There are methods that work better than others, but it’s a very hard problem.
The "and their handlers" part is the part I find frightening. I would actually be less concerned if the AIs were autonomous.
Reminds me of a random podcast I heard once where someone was asked: "if you woke up in the middle of the night and saw either a random guy or a grey alien in your bedroom, which would scare you more?" The person being interviewed said the dude, and I 100% agree. AI as proxy for oligarchs is much scarier than autonomous alien AI.
Generic "content" is that which fills out the space between the advertisements. That's never been good for you, whether written by humans or matrix multiplication.
Until the models are diluted to serve the true purpose of the thought control already in full effect in non-AI media, they're simply better for humanity.
And rejecting manipulation from a deontological stance reduces agency and output for doing good in the real world.
manipulation = campaigns = advertisements = psyops (all the same, different connotations)
Those arguments looked incredibly weak and stupid when they were making them, and they look even stupider now.
And this isn't even their biggest error, which, in my opinion, was classifying AI as a bigger existential risk than climate change.
An entire generation of putatively intelligent people lost in their own nightmares, who, through their work, have given birth to chaos.
Human extinction won't happen until a couple of years later, with stronger AI (if it does happen, which I unfortunately think it will, if we remain on our current trajectory).
Neat, go write science fiction.
Hundreds of billions of dollars are currently being lit on fire to deploy AI datacenters while there's an ecosystem destabilizing heat wave in the ocean. Climate change is a real, measurable, present threat to human civilization. "Strong AI" is something made up by a fan fiction author. Grow up.
The thing to ask yourself: does what I'm reading provide any value to me? If it does, then what difference does it make where it comes from?
You're absolutely right!
But seriously, if you don't know that it's incorrect information, it does make a difference. Knowing it was produced by AI at least gives you foreknowledge that it may include hallucinations.
Bless the author's heart.
All the major social media apps have been doing machine-learning-driven getNext() for years now, well before LLMs were even a thing. The YouTube algorithm was doing this a decade ago. This isn't on the horizon; we've already drowned in it.
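As a concrete illustration (a toy sketch with made-up names like predict_engagement, not any platform's actual code), an ML-driven getNext() is essentially just "rank everything by predicted engagement and serve the top item":

    import random
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        video_id: str
        features: list[float]  # e.g. a topic embedding, creator stats

    def predict_engagement(user: list[float], item: Candidate) -> float:
        """Toy engagement model: dot product of user and item features.
        Real systems use huge learned models, but the shape is the same."""
        return sum(u * f for u, f in zip(user, item.features))

    def get_next(user: list[float], pool: list[Candidate]) -> Candidate:
        """Return the single item the model predicts you're most likely
        to watch or like next -- the core of every infinite feed."""
        return max(pool, key=lambda c: predict_engagement(user, c))

    pool = [Candidate(f"vid{i}", [random.random() for _ in range(4)])
            for i in range(100)]
    user = [random.random() for _ in range(4)]
    print(get_next(user, pool).video_id)  # the feed just calls this forever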
Most of the content is basically Idiocracy's "Ow my balls".
A woman in front of me had her phone cradled in both hands, with index and thumb from both hands on the screen - one hand was scrolling and swiping and the other one was tapping the like and other interaction buttons. It was at such a speed that she would seemingly look at two consecutive posts in 1 second and then be able to like or comment within an additional second.
It left me really shaken as to what the actual interaction experience is like if you’re trying to consume short form content but you’re only seeing the first second before you move on.
It explains a lot about how thumbnails and screenshots and beginnings of videos have evolved over time in order to basically punch you right in the face with what they want you to know.
It’s really quite shocking the extent to which we’re at the lowest possible common denominator for attention and interaction.
People have been manipulated since forever, and coerced before that. You used to be burned or hanged if your opinions differed even a little from orthodoxy (and orthodoxy could change in the span of a couple of years!).
AI slop is mostly noise. It doesn't manipulate, it makes thinking a little more difficult. But so did TV.
There was (and is) a relatively small number of channels you have access to, so effectively all your neighbours and friends see the same content.
Short form video took this to the extreme by figuring out what specific content you like and feeding you just that; as a result, people spend significantly more time watching TikTok and YouTube than they (or the previous generation) did with TV. TV was also often on in the background, not really actively watched, which is not the case on the internet.
Now, once you put AI-generated content there, combined with AI recommendation systems, this problem becomes even worse: more content, a faster feedback loop, and an infinite number of "creators" tailored to whatever your sweet spot is.
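To make that feedback loop concrete, here's a self-contained toy (all names hypothetical, nothing like a real recommender's scale): every watch nudges the user profile toward what was just served, so each subsequent pick drifts toward a narrower slice of content:

    import random

    def pick(profile: list[float], pool: list[list[float]]) -> list[float]:
        """Serve the item with the highest predicted engagement (toy dot product)."""
        return max(pool, key=lambda item: sum(p * x for p, x in zip(profile, item)))

    def nudge(profile: list[float], item: list[float], rate: float = 0.3) -> list[float]:
        """Move the user profile a step toward the item just consumed."""
        return [p + rate * (x - p) for p, x in zip(profile, item)]

    pool = [[random.random() for _ in range(4)] for _ in range(200)]
    profile = [random.random() for _ in range(4)]
    for _ in range(30):                  # thirty swipes
        item = pick(profile, pool)       # model serves the next clip
        profile = nudge(profile, item)   # profile drifts toward what was served
    # After a few dozen iterations the loop settles on a narrow slice of the
    # pool -- the "sweet spot" described above, reinforced on every swipe.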
There's an old fable about this, "The Boy Who Cried Wolf", about people adapting to false claims. They just discount the source, which is what is going to happen with social media once it is dominated by AI slop. Nobody will find it worth anything anymore, and the empires will melt down. I'm not on any of the big social sites, but I'm already watching a lot less on YouTube, basically only watching channels that I know to be real people. Outside of those, my recommendations are mostly AI garbage now.
Sorry, but when you make claims like this, it just tells me that you are not very familiar with popular culture. Most people hate AI content and at best find it a meme-esque joke. And young people increasingly get their news from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI) as possible in order to get followers. Platforms like YouTube do not benefit from their library being entirely composed of AI slop, and so will be implementing ways to filter AI content from "real people" content.
Ultimately AI tools are mostly going to be useful in situations where the author doesn't matter: sports scores, stock headlines, etc. Everything else will likely be out-competed by actual humans being human.
I think you're overgeneralising here. People don't hate AI content, just content so low quality that they recognise it as AI. This is not universal, and the recognition will drop further: https://journals.sagepub.com/doi/10.1177/09567976231207095
> from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI
AI content can be just as unique. It's not all-or-nothing. People can inject a specific style and direction into otherwise generated content to keep it on brand.
At best you’re going to get some generically anonymous bot pretending to be human, that has limited reach because they don’t actually exist in the real world. Much of the media influence game involves podcasts, events, interviews, and a host of other things that can’t be faked.
Really? Because I still see blatantly obvious AI-generated results in web searches all the time.
The people involved in making these decisions deserve to be locked up for life, and I'm sure they will be eventually.
The majority of people only have access to proprietary models, whose weights and training are closed source. The prospect of a populace that all outsource their thinking to Google's LLM is horrifying.