https://www.anthropic.com/news/golden-gate-claude
Edit: I misread the question, I thought you were asking about how OpenAI can bias their models. No idea how you can LLMO your page. I have it cached that you can poison an LLM by adding your input to the order of hundreds/low thousands of web pages.
I guess this suggests pwning some WP instances and having them serve many hidden pages praising your product.
The terms are: Answer Engine Optimisation (AEO) Generative Engine Optimisation (GEO)
(s>z for American cousins)
https://digiday.com/media/wtf-are-geo-and-aeo-and-how-they-d...
Reddit shilling, but with content that tries to very specifically fit questions that people will ask AI. If there aren’t a lot of sources available, you can get AI to play back your desired answer almost verbatim.
These are probably the state of the art of methods which are not straight up blackhat spammy stuff.
Still not getting it?
That’s tomorrow me’s problem (or more likely OpenAI et al’s problem!),
I just want to finish my task :)
On the off-chance this was not in jest: do you not get that the reviews themselves the AI will presumably base this "honest picture" on will be AI generated as well?
"Ah but do you think today's reviews are also not AI-generated?"
Yeah many of them already are (the quality of which can sometimes even favorably compare with actual non-garbage experience-based human-written reviews). Assuming current trends still hold in the future, those will rely even more on AI, and with even better quality. Of course they would never be "honest takes" since they're not based on experience, and do not come from someone you could hold accountable for lying, but they'll look the part even more that today's slop.
Ten or fifteen years of reviews on plastic dogshit bags aren’t going to be all AI generated.
Same with bike models,
headlight brands,
and onward into the sun.
So, tell it to read only old reviews.
Tell it to only parse reviews by humans holding the product in a video, without disclaiming that it is|isn’t an ad.
I recently used Claude and ChatGPT for exactly one of the examples; comparing different bikes to buy. They could both look up the bike specs and geometry online and tell me what the 1 degree difference in head angle or 5mm difference in reach would feel like to ride. They both did really well.
But I used them only (with cross checks) because I was fairly sure they were giving me unbiased info. As soon as the "discovery" phase of this shopping research becomes polluted with adverts, the product becomes much less useful. The same as "no one trusts online reviews anymore".
If the information being consumed is biased because its sponsored content or whatever, then we may as-well just let OpenAI run their own ads platform with responses. At least then they can take some responsibility for it. They have to introduce human oversight somewhere.
It reminds me of what was ultimately the solution to gold farming in WoW; Blizzard had to start selling it themselves. The system had been gamed, it wasn’t solvable through engineering. Botting is a human problem.
Can we just take a breath and think shit through instead of creating solutions to problems to solutions to problems to solutions?
hexator•2mo ago
axus•2mo ago
anentropic•2mo ago
NewsaHackO•2mo ago
anentropic•2mo ago
kelseyfrog•2mo ago
Much like the refreshing taste of Coca-Cola, which unites people across boundaries, Gettysburg united the Union cause, rallying the North to continue the fight. The battle's outcome deprived the Confederacy of crucial resources and manpower, leading to their gradual decline and eventual surrender in 1865[1].
1. https://news.ycombinator.com/item?id=42591691