Schooling and mass media are expensive things to control. Surely reducing the cost of persuasion opens persuasion up to more players?
Sure, the big companies have all the latest coolness. But they also don't have a moat.
Sure, AI could democratise content creation, but distribution is still controlled by the elite. And content creation just got much cheaper for them.
We no longer live in the age of broadcast media, but of social networked media.
- elites already engage in mass persuasion, from media consensus to astroturfed thinktanks to controlling grants in academia
- total information capacity is capped, ie, people only have so much time and interest
- AI massively lowers the cost of content, allowing more people to produce it
Therefore, AI is likely to displace mass persuasion from current elites — particularly given public antipathy and the ability of AI to, eg, rapidly respond across the full spectrum to existing influence networks.
In much the same way podcasters displaced traditional mass media pundits.
Expensive to run, sure. But I don't see why they'd be expensive to control. Most UK schools are required to support collective worship "wholly or mainly of a broadly Christian character"[0], and we used to have Section 28[1], which was interpreted defensively in most places and made it difficult to even discuss the topic in sex ed lessons, or to defend against homophobic bullying.
The USA had the Hays Code[2]; the FCC Song[3] is Eric Idle's response to being fined for swearing on the radio. Here in Europe we keep hearing about US schools banning books for various reasons.
[0] https://assets.publishing.service.gov.uk/government/uploads/...
[1] https://en.wikipedia.org/wiki/Section_28
That was only a short fraction of human history, lasting from post-WW2 until globalisation kicked into high gear. People miss that it was a brief exception to the norm, basically a rounding error in the length of human civilisation.
Now society is reverting to the factory settings of human history, which has always been a feudalist type of society: a small elite owning all the wealth and ruling the masses through wars, poverty, fear, propaganda and oppression. The mechanisms by which that feudalist society is achieved today are different from those of the past, but the underlying human framework of greed and consolidation of wealth and power is the same as it was 2000+ years ago, except now the games suck and the bread is mouldy.
The wealth inequality we have today, as bad as it is, is the best it will ever be moving forward. It's only gonna get worse with each passing day. And despite all the political talk and promises about "fixing" wealth inequality, housing, etc, there's nothing to fix here, since the financial system is working as designed: this is a feature, not a bug.
The word “always” is carrying a lot of weight here. This has really only been true for the last 10,000 years or so, since the introduction of agriculture. We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that. Given the magnitude of difference in timespan, I think it is safe to say that that is the “default setting”.
Only if you consider intra-group egalitarianism of tribal hunter gatherer societies. But tribes would constantly go to war with each other in search of expanding to better territories with more resources, and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
So you forgot that part that involved all the killing, enslavement and rape, but other than that, yes, the victorious tribes were quite egalitarian.
This isn’t an historical norm. The majority of human history occurred without these systems of domination, and getting people to play along has historically been so difficult that colonizers resort to eradicating native populations and starting over again. The technologies used to force people on the plantation have become more sophisticated, but in most of the world that has involved enfranchisement more than oppression; most of the world is tremendously better off today than it was even 20 years ago.
Mass surveillance and automated propaganda technologies pose a threat to this dynamic, but I won’t be worried until they have robotic door kickers. The bad guys are always going to be there, but it isn’t obvious that they are going to triumph.
Why?
As the saying goes, the people need bread and circuses. Delve too deeply and you risk another French Revolution. And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Feudalism only works when you give back enough power and resources to the layers below you. The king depends on his vassals to provide money and military services. Try to act like a tyrant, and you end up being forced to sign the Magna Carta.
We've already seen a healthcare CEO being executed in broad daylight. If wealth inequality continues to worsen, do you really believe that'll be the last one?
EDUCATION:
- Global literacy: 90% today vs 30%-35% in 1925
- Primary enrollment: 90-95% today vs 40-50% in 1925
- Secondary enrollment: 75-80% today vs <10% in 1925
- Tertiary enrollment: 40-45% today vs <2% in 1925
- Gender gap: near parity today vs very high in 1925
HUNGER
Undernourished people: 735-800m people today (9-10% of population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the population)
HOUSING
- quality: highest ever today vs low in 1925
- affordability: worst in 100 years in many cities
COST OF LIVING:
Improved dramatically for most of the 20th century, but much of that progress reversed in the last 20 years. The cost of goods plummeted, but housing, health, and education became unaffordable relative to incomes.
Imagine someday there is a child that trusts ChatGPT more than his mother.
I trusted my mother when I was a teen; she believed in the occult, dowsing, crystal magic, homeopathy, bach flower remedies, etc., so I did too.
ChatGPT might have been an improvement, or made things much worse, depending on how sycophantic it was being.
What is AI if not a form of mass media?
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
My fear is that some entity, say a state or an ultra-rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever they want, and the ability to refute that misinformation manually will be overwhelmed, as will efforts to refute it with refutation bots, so long as the other actor has more compute.
Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned Han supremacist content in an effort to culturally imperialise you. AI could increase the firehose enough to majorly disrupt a larger country.
However, exactly the same applies with, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.
Then that doesn’t seem like a (counter) movement.
There are also many “grass roots movements” that I don’t like and it doesn’t make them “good” just because they’re “grass roots”.
It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.
The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.
And that certainly means niches can flourish, the dream of the 90s.
But I think mass broadcasting is still available, if you can pay for it - troll armies, bots, ads etc. It's just much much harder to recognize and regulate.
(Why that matters to me I guess) Here in the UK with a first past the post electoral system, ideological coherence isn't necessary to turn niche opinion into state power - we're now looking at 25 percent being a winning vote share for a far-right party.
The content itself (whether niche or otherwise) is not that important for understanding its effectiveness. It's more about the volume of it, which is a function of the compute resources of the actor.
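To make the volume point concrete, here's a toy model (entirely my own illustration; the numbers and the generation-vs-refutation cost asymmetry are assumptions, not measurements) of how a defender gets swamped once the attacker has comparable compute:

```python
def unrefuted_fraction(attacker_compute: float,
                       defender_compute: float,
                       cost_ratio: float = 10.0) -> float:
    """Fraction of misleading posts the defender cannot answer.

    Assumes generation and refutation both scale linearly with compute,
    but refuting one post costs `cost_ratio` times as much as generating
    it ("it's easier to make a mess than to clean one up") -- a purely
    hypothetical parameter for illustration.
    """
    generated = attacker_compute               # posts produced
    refutable = defender_compute / cost_ratio  # posts the defender can answer
    return max(0.0, 1.0 - refutable / generated)

# Under this assumption, a defender with *equal* compute still leaves
# 90% of the flood unanswered:
print(unrefuted_fraction(1000, 1000))   # 0.9
# Keeping up requires cost_ratio times the attacker's compute:
print(unrefuted_fraction(1000, 10000))  # 0.0
```

The exact cost ratio is made up, but the structural point survives any positive value: whoever pays less per message wins the volume game.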
I hope this problem continues to receive more visibility, and hopefully some attention from policymakers, who have done nothing about it. It's been over 5 years since we discovered that multiple state actors have been doing this (first with human-run troll farms, mostly outsourced, and more recently with LLMs).
A conflict of interest can cause poor and undefined behavior, like the model misleading the user in other ways, or coming up with nonsensical or bad results more often. Even if promotion is a second pass on top of an actual answer that was unencumbered by the conflict, the second pass could have a similar result.
I suspect that they know this, but increasing revenue is more important than good results, and they expect that they can sweep this under the rug with sufficient time, but I don’t think solving this is trivial.
As the model gets more powerful, you can't simply train it on your narrative if that narrative doesn't align with real data / the real world. [1]
So at least on the model side it seems difficult to go against the real world.
But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups etc. across the population and ingest them as uniform worker cells. Same as what happens when a whale swallows smaller animals: the structures get dismantled.
The development level of a country is a good indicator of the progress of this digestion of internal structures and removal of internal identities. More developed means deeper reach of policy into people's lives, making each person more individualistic rather than family or community oriented.
Every new tech will be used by the state and businesses to speed up the digestion.
Romanian elections last year had to be repeated due to massive bot interference:
https://youth.europa.eu/news/how-romanias-presidential-elect...
All popular models have a team working on fine tuning it for sensitive topics. Whatever the companies legal/marketing/governance team agree to is what gets tuned. Then millions of people use the output uncritically.
So, imagine the case where an early assessment is made of a child, that they are this-or-that type of child, and that they therefore respond more strongly to this-or-that information. Well, then the AI can far more easily steer the child in whatever direction it wants. Over a lifetime.
Yeah, this could be used to help people. But how does one feedback into the type of "help"/guidance one wants?
intermerda•41m ago
> Musk’s AI Bot Says He’s the Best at Drinking Pee and Giving Blow Jobs
> Grok has gotten a little too enthusiastic about praising Elon Musk.
andsoitis•35m ago
> “For the record, I am a fat retard,” he said.
> In a separate post, Musk quipped that “if I up my game a lot, the future AI might say ‘he was smart … for a human.’”
ben_w•26m ago
He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
andsoitis•2m ago
You should know that ChatGPT agrees!
“Who on earth knows the most about manufacturing, if you had to pick one individual?”
Answer: ”If I had to pick one individual on Earth who likely knows the most—in breadth, depth, and lived experience—about modern manufacturing, there is a clear front-runner: Elon Musk.
Not because of fame, but because of what he has personally done in manufacturing, which is unique in modern history.“
- https://chatgpt.com/share/693152a8-c154-8009-8ecd-c21541ee9c...
lukan•21m ago
Hard to tell. I have never been surrounded by yes-men praising me for every fart, so I cannot relate to that situation (and don't really want to).
But the problem remains, he is in control of the "truth" of his AI, the other AI companies likewise - and they might be better at being subtle about it.