Back in September 2024 I named a whale "Teresa T" with just a blog entry and a YouTube video caption: https://simonwillison.net/2024/Sep/8/teresa-t-whale-pillar-p...
(For a few glorious weeks if you asked any search-enabled LLM, including Google search previews, for the name of the whale in the Half Moon Bay harbor it confidently replied Teresa T)
It's a demonstration. If a domain name and a quick bit of Wikipedia vandalism is all it takes to make an LLM start spouting nonsense about a "surprisingly serious tournament circuit" or a "massive online community" for an obscure card game, consider what an unscrupulous PR team or a political operative could do to influence its output on more important topics.
"Could do"? Is doing.
We can easily look ahead a few years and see people relying on LLMs as a source of truth, the same way people once looked to Google, or to newspapers.
Rewriting history has been happening for a while, and with LLMs becoming the one-stop shop for guidance and truth, the rewrite will be complete.
Doubly so since most people see these things as artificial intelligence, and soon-to-be superintelligence... so how could they be wrong?
Doesn't help that AI media literacy is so primitive compared to how capable the models generally are. We're in a marginally better place than we were back when chatbots didn't cite anything at all, but a set of citations that all trace back to a single Wikipedia source for a supposedly global event is just embarrassing. By default, I feel citations and epistemological qualifications should be explicit, front-and-center, and open to inspection, not implicit and confined to tiny opaque buttons as an afterthought.
You can expect the spicy autocomplete to feed you flattering bullshit. It may cite Wikipedia (it shouldn't), but you should go check out those citations, and validate the claims yourself. It's the least you can do.
And if the cited source is Wikipedia... check Wikipedia's sources too. Wikipedians try their best to provide reliable sources for the claims in their articles (oh, who am I trying to kid? They pick their favourite sources that affirm their beliefs, contending editors remove them for no good reason, and eventually all that accrues is what the factions agree on, or at least what ArbCom has demanded they stop fighting over).
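If you want to script part of that legwork, here's a minimal sketch using the MediaWiki parse API (a real endpoint; the page title is just an example I picked) to dump every external link an article cites, so you can at least see what kind of sourcing is behind it:

    import json
    import urllib.parse
    import urllib.request

    # List every external link cited by a Wikipedia article via the
    # MediaWiki parse API. The page title is just an example.
    API = "https://en.wikipedia.org/w/api.php"
    params = urllib.parse.urlencode({
        "action": "parse",
        "page": "Half Moon Bay, California",
        "prop": "externallinks",
        "format": "json",
    })
    req = urllib.request.Request(
        f"{API}?{params}",
        headers={"User-Agent": "citation-checker/0.1 (example script)"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    for url in data["parse"]["externallinks"]:
        print(url)

Whether those links are any good is still a judgment call only you can make.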
I guess what I'm trying to say is: don't rely on that authoritative-sounding tone that Wikipedia uses (or that AI bots use, or that I'm using right now). It's a rhetorical trick that short-circuits your reasoning. Verify claims with care.
Also check the Talk page; you'll often find all kinds of shenanigans called out there.
(Norm Macdonald voice) Or so the Germans would have us believe...!
It's almost like he was a better Chuck Norris than Chuck Norris. By his own ... testimony ...
"AI told me that..."
In the old days, it would have been "I read on Google..."
This is sort of why "brand" matters; it provides a source of trust.
Encyclopedia Britannica used to be that source of 'facts'. Then it became whatever PageRank told you. Eventually SEO ruined that.
News stories are the same thing. Certain groups have their 'independent' publication whose reporting they trust.
So for bad actors it's more efficient to manufacture brand-new fake stories than to distort real ones. Don't produce fake articles absolving yourself of a crime; instead, produce fake articles accusing your opponent of 100 different things. People will then fact-check the accusations with LLMs, and since all the sources mentioning those accusations are controlled by you, the LLMs will confirm them.
Even being on stoner.com, I read that as meaning something different from what was meant.
OP has a great surname!