I’m one of the researchers at Aurascape Aura Labs who worked on this.
We’ve been investigating a real‑world campaign where scammers seed GEO/AEO‑optimized content across the web so that LLM‑powered answer engines (Perplexity, Google AI Overview, etc.) surface fraudulent “customer support” phone numbers as if they were official airline lines.
A few things we found:
Perplexity answers queries like “the official Emirates Airlines reservations number” or “how can I make a reservation with British Airways by phone” with step‑by‑step instructions that prominently include a scam call‑center number.
Google’s AI Overview has, in some cases, recommended the same ecosystem of fake numbers for Emirates reservations in the US.
The numbers are backed by a large volume of poisoned content: PDFs planted on compromised .gov/.edu/WordPress sites, MapMyRun route pages hosting uploaded spam PDFs, bot‑generated Yelp reviews, and YouTube channels whose titles and descriptions are stuffed with airline keywords and phone numbers (there's a rough sketch of this brand‑plus‑number co‑occurrence pattern right after this list).
Even when ChatGPT or Claude return the correct number, their citations sometimes include these poisoned sources, which suggests the GEO/AEO spam is already influencing the retrieval layer across multiple models.
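To make the co‑occurrence pattern concrete, here's a minimal sketch of the kind of check that surfaces it: a brand keyword plus an unrecognized phone number on a host that isn't the brand's own domain. The brand list, regex, and empty allowlist are illustrative assumptions for the sketch, not our production detection logic:

```python
import re

# Illustrative brand -> official-domain map (assumption for this sketch).
BRAND_DOMAINS = {"emirates": "emirates.com",
                 "british airways": "britishairways.com"}
# Loose NANP-style phone pattern; real tooling would normalize more carefully.
PHONE_RE = re.compile(r"\+?1?[\s.\-(]*\d{3}[\s.\-)]*\d{3}[\s.\-]*\d{4}")
# Would be populated from numbers the brands themselves publish; empty here.
OFFICIAL_NUMBERS: set[str] = set()

def normalize(number: str) -> str:
    digits = re.sub(r"\D", "", number)
    return digits[-10:]  # keep the trailing 10 digits (US/Canada)

def flag_page(host: str, text: str) -> list[str]:
    """Flag unknown phone numbers that co-occur with an airline brand
    keyword on a host that is not the brand's own domain."""
    lowered = text.lower()
    mentioned = {b for b in BRAND_DOMAINS if b in lowered}
    if not mentioned or any(host.endswith(BRAND_DOMAINS[b]) for b in mentioned):
        return []
    hits = {normalize(m.group()) for m in PHONE_RE.finditer(text)}
    return sorted(n for n in hits - OFFICIAL_NUMBERS if len(n) == 10)

# Example: a spam PDF uploaded to a fitness-route site (number is fictional).
print(flag_page("mapmyrun.com",
                "Emirates reservations helpline +1 (888) 555-0142, call now"))
# -> ['8885550142']
```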
From our perspective this isn’t a jailbreak or prompt‑injection problem so much as a new LLM index poisoning / answer‑engine optimization issue: attackers are optimizing content specifically to be retrieved, trusted, and summarized by generative systems.
The post includes:
concrete screenshots for Perplexity and Google AI Overview
an explanation of how the GEO/AEO spam is structured
a non‑exhaustive list of indicators of compromise (phone numbers + abused hosts)
some mitigation ideas for LLM vendors, brands, and platforms like YouTube/Yelp (one vendor‑side check is sketched right after this list)
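As one example of the vendor‑side mitigations, here's a rough sketch of a vet‑before‑surface step: before an answer engine shows a phone number for a known brand, check it against numbers actually published on the brand's verified domain. The registry and the numbers below are placeholders, not real data:

```python
import re

# Loose phone pattern; a real system would use proper number parsing.
PHONE_RE = re.compile(r"\+?\d[\d\s().\-]{7,}\d")

VERIFIED = {
    # brand -> numbers scraped from its official site (placeholder value)
    "emirates": {"18005550100"},
}

def digits_only(s: str) -> str:
    return re.sub(r"\D", "", s)

def unverified_numbers(brand: str, draft_answer: str) -> list[str]:
    """Return numbers in a draft answer that are not on the brand's
    verified list; the caller can suppress or re-source the answer."""
    known = VERIFIED.get(brand.lower(), set())
    found = {digits_only(m.group()) for m in PHONE_RE.finditer(draft_answer)}
    return sorted(n for n in found if n not in known)

draft = "Call Emirates reservations at +1 (888) 555-0142 to book by phone."
print(unverified_numbers("Emirates", draft))  # -> ['18885550142']
```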
Feedback very welcome — especially from people working on LLM retrieval/ranking, safety, or abuse detection. Happy to clarify any part of the methodology or data.