It’s gonna be even wilder when people realise they have an incentive to seed fake information on the internet to game AI product recommendations.
I’ve already bought stuff based on an AI suggestion. I didn’t even consider it would be so easy to influence the suggestion. Just two research papers? Mad.
In the old days of computing people liked to say “garbage in, garbage out”.
For humans, or AI, to have any knowledge, we need trustworthy sources.
Naturally, when you use publishing systems considered trustworthy, their output is going to be trusted.
Clickbait headline.
We'll see if they succeed.
LLMs do not think; why is this still hard to understand? They just spit out whatever data they analysed and were trained on.
I feel this kind of article is aimed at people who hate AI and just want to be comfortable within their own bias.
Most doctors would not believe that, and would also treat any new eye disease they’d never seen in real life with scepticism.
It's too easy to "lead the witness": if you say "could the problem be X?", it will do an unending amount of mental gymnastics to find a way that it could be X, often constructing elaborate Rube Goldberg-style rat's nests of logic so that it can say those magic words, "you're absolutely right".
I would pay a lot of money for a blunt, non-politeness-conditioned LLM, and I would happily use it knowing it might occasionally say something offensive, if it meant I got the plain, cold, hard truth instead of something watered down and placating from a nanny-state robotic sycophant that spins logical spider webs, desperate for acceptance, so the public doesn't get its little feelings hurt or its inadequacies shown.
daoboy•1h ago
I'm not sure how many Medium articles, blog posts and reddit threads I need to put out before grok starts telling everyone my widget is the best one ever made, but it's a lot cheaper than advertising.
21asdffdsa12•1h ago
linzhangrun•1h ago
21asdffdsa12•1h ago
simmerup•59m ago
saidnooneever•1h ago
sublinear•1h ago
pjc50•45m ago
I think I see a problem here.
sublinear•1h ago
I seriously do not understand why people keep falling for this. These tools are not made free or cheap out of the kindness of anyone's heart.
teaearlgraycold•57m ago
pjc50•38m ago
But this really highlights how much we've been benefiting from living in a high-trust society, where people don't just "go on the internet and tell lies" - and from the existing anti-spam and anti-SEO measures intended to cut out the 80% of the internet where people do just make things up to sell products.
LLMs are extremely post-structuralist. They really force the user to decide whether to pick the beautiful eternal fountain of plausible looking text with no ground truth, or a much harder road of distrust, verification, and old-school social proof.
eqvinox•31m ago
Meanwhile, LLMs are essentially internet regurgitation machines, because of course they are, that's what they do. Which makes them useless for getting "hard truth" answers especially in contested or specialized fields.
I'm honestly afraid of the impact of this. The internet has enough herd bullshit on it as it is. (e.g. antivaxxers, flat earthers, electrosensitivity, vitamin/supplement junk, etc.) We don't need that amplified.
latexr•5m ago
Probably not that many.
https://www.anthropic.com/research/small-samples-poison
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...