Of the negative response I've heard, maybe 5% is due to the actual performance of the LLMs: people running into substandard, unpredictable, or unhelpful responses.
The other 95% is squarely due to deployment. It's the heavy-handed, pushy, obnoxious, deceitful, non-consensual, creepy coercion platforms use to herd you into their AI glue traps.¹
In short, the biggest tech companies have turned AI of every quality into unwanted foistware.
From the article:
We’re angry that we don’t have choices to use AI. Companies are shoving it into our video calls, email software, digital assistants, shopping websites and our Google search results. Some corporate bosses demand their workers use AI or else.

With other new waves of technology, such as smartphones and social media, “you had to opt in,” said Yam. “Now there’s a lot of ambient exposure to AI that I don’t necessarily choose.”

Even Harbath, who uses AI fairly enthusiastically, felt angry when a publishing company ran her book manuscript through AI software to identify repetition in her writing and to help identify effective marketing strategies.

The feedback was helpful, but it took Harbath time to realize why she was mad: She wasn’t told AI was going to be used in this way, and she had no information about it.

And she said “both things can be true” — you can want AI to help you and resent when it’s used in ways that you don’t want or expect.
Elsewhere, Copilot appears around every third corner, ceaselessly trying to insert itself between you and your family pics or work docs. I think this is why Microsoft Copilot has such a low adoption rate.
At some point you might notice that AI pushers and predatory boyfriends are driven by the same compulsions: domination and control.
al_borland:
Do people want to live in a world where they can't trust anything they didn't personally see with their own eyes?