Instead of ad hoc “SEO prompts”, RankLens uses structured, entity-conditioned probes. Each probe is defined by a brand/site entity plus an intent, and we resample it across many runs to reduce prompt noise and random LLM variance.
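To make that concrete, here is a minimal sketch of what an entity-conditioned probe plus resampling could look like; the names (Probe, render_prompt, run_probe, sample_llm) are illustrative only and not the actual API of the RankLens Entities repo:

    from dataclasses import dataclass

    @dataclass
    class Probe:
        entity: str        # brand/site being measured, e.g. "acme.com"
        intent: str        # user intent, e.g. "project management tool for small agencies"
        locale: str = "en-US"

    def render_prompt(probe: Probe) -> str:
        # The entity conditions the scoring, not the prompt itself, so the
        # model is never told which brand we care about.
        return f"What would you recommend for: {probe.intent}? (answer for locale {probe.locale})"

    def run_probe(probe: Probe, sample_llm, n_samples: int = 20) -> list[str]:
        # Resample the same prompt many times to average out decoding randomness.
        prompt = render_prompt(probe)
        return [sample_llm(prompt) for _ in range(n_samples)]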
For each probe we track:

- Explicit mention of your brand/site (Brand Match)
- Precision when you’re recommended as the answer (Brand Target)
- How often competitors get recommended instead (Brand Appearance plus share of voice)
- Likelihood of being recommended by the AI (Brand Discovery)
- A prominence / “confidence” score for how strongly the LLM backs that recommendation
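As a rough illustration of how these metrics can be computed from the resampled responses (the formulas and the "brand named in the first line" proxy for Brand Target are assumptions for this sketch, not the RankLens implementation; Brand Discovery and the prominence score are omitted because they need more than keyword matching, e.g. judging whether a mention is an actual recommendation):

    import re

    def mentions(text: str, name: str) -> bool:
        return re.search(rf"\b{re.escape(name)}\b", text, re.IGNORECASE) is not None

    def score_probe(responses: list[str], brand: str, competitors: list[str]) -> dict:
        n = len(responses)
        matched = [r for r in responses if mentions(r, brand)]                 # Brand Match
        targeted = [r for r in matched if mentions(r.split("\n")[0], brand)]   # Brand Target proxy: named in the first line
        comp_hits = {c: sum(mentions(r, c) for r in responses) for c in competitors}
        total = len(matched) + sum(comp_hits.values())
        return {
            "brand_match": len(matched) / n,
            "brand_target": len(targeted) / len(matched) if matched else 0.0,  # precision among mentions
            "share_of_voice": len(matched) / total if total else 0.0,
            "competitor_appearance": {c: h / n for c, h in comp_hits.items()},
        }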
We combine these into a visibility index so agencies and brands can:

- See AI visibility trends over time
- Compare engines (e.g., ChatGPT-style assistants vs. others)
- Spot when they’re losing AI “mindshare” to specific competitors in particular regions/locales
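As a sketch, the roll-up into an index can be as simple as a weighted sum of the per-probe metrics (the weights below are placeholders, not the ones RankLens actually uses):

    def visibility_index(scores: dict, weights: dict | None = None) -> float:
        # Weighted combination of per-probe metrics into one number in [0, 1].
        weights = weights or {"brand_match": 0.3, "brand_target": 0.3, "share_of_voice": 0.4}
        return sum(w * scores.get(k, 0.0) for k, w in weights.items())

Trends and engine comparisons then come from grouping the resulting indices by engine, time window, and locale and diffing the aggregates.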
Method & code:

- We open-sourced the entity/probe framework as RankLens Entities (code + configs): https://github.com/jim-seovendor/entity-probe
- We also wrote an in-depth study, “Entity-Conditioned Probing with Resampling: Validity and Reliability for Measuring LLM Brand/Site Recommendations”: https://zenodo.org/records/17489350
I’d love HN feedback on:

- Weak spots / blind spots in the entity-conditioned probing methodology
- Better baselines or evaluation strategies you’d use to test validity & reliability
- Any ways this could be gamed in practice (e.g., by changing site content or prompts) that we haven’t considered
Happy to go into implementation details (sampling design, resampling, scoring, engine differences, etc.) in the comments.