Like, I have a hard time even strawmanning such a position. Of course people armed with highly persuasive AIs will task those AIs with doing what is best for the people who own the AIs, and the odds of that happening to line up with your own interests are fairly low. What the hell else are they going to do with them?
But then, keep gaming it out. This particular thing isn't exactly new. We've seen bumps in persuasiveness before. Go back and watch a commercial from the 1950s. It's hard to believe it would have done a darned thing, but at the time people had not yet had to develop defenses against it.
We had to develop defenses. We're going to have to develop more.
What I foresee as the endgame is not that we become mindless robots completely run by AIs... or, at least, not all of us... but that we enter into a world where we simply can't trust anything. Anywhere. At all. How does any being, human or otherwise, function in an infosphere where an exponentially large proportion of it is nothing but persuasion attempts, hand-crafted by the moral equivalent of a team of PhDs personally dedicated to controlling me? Obviously humanity has never had a golden age where you could truly just trust something you heard, and we can argue about the many and sundry collective failures to judge the veracity of various claims, but it's still going to be a qualitative change when our TV programs are designed by AIs to persuade us, and the web is basically custom assembling itself in front of us to persuade us, and all of our books are AI-generated to persuade us, and literally no stone is left unturned in the ever-escalating goal to use every available channel to change our behavior to someone else's benefit.
When does trying to become informed become an inevitable net negative because you're literally better off knowing nothing?
What happens when the bulk of society finally realizes we've hit that point?
They don’t profit from it (well, at least the current doom crowd doesn't), they have been saying that this is super-risky since “forever” (15 years), they strongly argue against e.g. OpenAI going private… I really don’t understand where this whole strand of thinking about hidden ulterior motives for AI doomers comes from. AFAIK there has never been a single clear case where this happened (and the arc of people invested in AI is that they become less of a doomer the more financially entangled they become with AI; see Elon or Altman).
One thought path leads to business-as-usual comfort, happily motivated by trillions of $ in market cap and VC.
The other is dread.
It makes sense that people take the easy road. We're still battling climate change denialism, after all.
The subtext isn't "our product has bad side-effects that can be avoided with known alternatives", but more like: "Our product's core benefit is too awesome at what it does; if we don't pursue it then someone less enlightened will do it anyway; whether you love it or hate it, the only way to influence the outcome is to support us."
So the oil-executive version would be something like "worrying" that petrochemical success will quickly bankrupt every other power-generation or energy-storage system overnight, causing economic disruption, and that eventually it will make mankind too satisfied and too comfortable so that it falls into existential torpor... But at least DinoCo™ has a What If It's Too Awesome working group to study the issue while there's still time before our inevitable corporate win of All The Things.
But then you have the problem: If you won't update your priors, and neither will someone else, but they have different priors than you, how can you talk to them?
But I'm maybe a bit less cynical than you. I think (maybe I'm kidding myself) that I can to some degree detect... something.
If someone built a parallel universe of false journal articles written by false experts, and then false news articles that referred to the false journal articles, and then false or sockpuppet users to point people to the news articles, that would be very hard to detect if it was done well. (A very persistent investigator might realize that the false experts either came from non-existent universities, or from universities that denied that the experts existed.) But often it isn't done well at all. Often it's "hey, here's this inadequately supported thing that disagrees with your priors that everybody is hyping". For me, "solidly disagrees with my solidly-held priors" is enough to give it a very skeptical look, and that's often enough to turn up the "inadequately supported" part. So I can at least sometimes detect something that looks like this, and avoid listening and believing it.
I'm hoping that that's enough to avoid "When does trying to become informed become an inevitable net negative because you're literally better off knowing nothing?" But we shall see.
In the current world, and the world for the next few years, the amount of human and CPU time that can be aimed at me personally is still low enough that what "real reality" generates outweighs the targeted content, and even the targeted content is clearly more accurately modeled by a lot of semi-smart people just throwing stuff at the wall and hoping to hook "someone" who may not be me personally. We talk about PhDs getting kids to click ads, and there's some truth to that, but at least there isn't anything like a human-brain-equivalent dedicated to getting my kids, personally, to click on ads. I have a lot of distrust of a lot of things, but at least I can counter the content with the fact that needing to appeal broadly still keeps it somewhat grounded in some sort of reality.
But over time, my personal brainpower isn't going to go up but the amount of firepower aimed directly at me is.
The good news is that it probably won't be unitary, just as the targeting today isn't unitary. But I'd like something better than that. And playing them against each other gets harder once the targeting becomes aware of that tactic and starts compensating for it, because now they have the firepower to aim that compensation at me personally.
And if you don't think this is the inevitable outcome, note how every social media platform has, slowly or quickly, gravitated toward maximizing engagement to the exclusion of all other priorities. Why would AI be any different?
amradio1989•10h ago
So here we are.
MattGrommes•9h ago
I think animal companions are a different class than chatbots since they're not trying to be people so I make no comment on those.
luckylion•8h ago
Why do they have to "do the work" to be deserving of companionship, when most of us don't have to do anything because it comes naturally to us and we can relatively easily regulate the amount of companionship we want?
I fail to see the bad thing. For some people it's either a chatbot (or a dog) or no interaction at all. Should people starve instead of eating at McDonald's because that's "not real food"?
chowells•7h ago
My cat will harass me if I'm on my computer after midnight. It's time to put the technology away and lie down where she can keep an eye on me. She's quite clear on this point. This is an entire category of interaction not available to chatbots. There is a difference in level of reality.
And when lacking human companionship, grounding to reality is really important. You've got to get out of your head sometimes.
amradio1989•3h ago
If you want a really hot take: AI chatbot companions are just an evolution of pets. They are a vaguely life-affirming substitute created to medicate human loneliness, for a fee of course.
bagrow•9h ago
- Kurt Vonnegut.
throwthatway46•9h ago
Dogs aren't people, but being with a dog is way better than being chronically alone. They can be training wheels to rejoining society.
raincole•9h ago
What does it even mean lol. If there were a button to make people find human companions, the ruling class would press it so hard just to raise the birthrate (= more working class).
amradio1989•4h ago
It's similar to how a controlling boyfriend/girlfriend will isolate you from your friends and family first. You are much easier to control that way. You stay "compliant".
This is much harder to see in democratic nations. The strategy in America has largely been controlling public discourse to the point where we self-censor.
AIPedant•8h ago
I am not opposed to chatbots for people who are so severely disabled that they can’t take care of cats, e.g. dementia. But otherwise AI companions are akin to friendship as narcotics are akin to happiness: a highly pleasant (but profoundly unhealthy) substitute.
handfuloflight•7h ago
So just system-prompt some non-spineless characteristics into the AI.
gonzobonzo•5h ago
On this point, pets are a lot closer to chatbots than to humans. You buy them, you have ownership of them, and they've literally been bred so that their genetics make it easy for them to grow attached to you and see you as a leader (while their brethren whose genes haven't been changed by humans don't do this). It's normal for people to use their complete control over every aspect of the animal's life to train it this way as well.
Your pet literally doesn't have the ability to leave you on its own. Ever.
derektank•3h ago
This is true of human beings as well, tbf.
gonzobonzo•5h ago
The truth is, just about everyone is using some sort of a substitute for real friends at this point.