Recommendations in the form of ads targeted to humans are a perfect use case for LLMs. There is no right answer, and the interpreter is a human. Hallucinations don't matter, and if the targeting is even a tad better, that justifies the investment.
Am I supposed to ask for permission when I talk to my friends about how attractive I think people of the opposite sex are?
I'm confused as to how you would even come to compare these two scenarios.
Many social media users are gullible enough to be convinced to act irrationally.
An AI well trained on great data (which Facebook has) is pretty good at sorting signal from noise, i.e. distinguishing advertisements that would appeal to a particular user from those that wouldn't.
This post is on to something.