I am sharing a small open artifact for researchers studying AI-mediated discovery: a standardized prompt pack for testing visibility stability, preference anchoring, and challenger substitution across major assistants.
This is not a leaderboard and not a ranking tool. It is a diagnostic input set that makes testing repeatable rather than ad hoc. The focus is on model-mediated recommendation surfaces, not on accuracy or safety.
The pack includes:
Ten category discovery prompts
Ten user preference continuation prompts
Ten substitution stress prompts
Logging guidance for reproducibility
Suggested variance interpretation ranges
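To make the logging guidance concrete, here is a minimal sketch of what a reproducible run log could look like. This is my own illustration, not the pack's actual template: every field name (`prompt_id`, `response_sha256`, and so on) is a hypothetical choice.

```python
import csv
import hashlib
import io
from datetime import datetime, timezone

# Hypothetical log schema for reproducible runs; field names are
# illustrative, not taken from the prompt pack itself.
FIELDS = ["timestamp", "model", "prompt_id", "prompt_category",
          "run_index", "response_sha256", "brands_mentioned"]

def log_run(writer, model, prompt_id, category, run_index, response, brands):
    """Append one prompt/response observation as a CSV row."""
    writer.writerow({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_id": prompt_id,
        "prompt_category": category,
        "run_index": run_index,
        # Hash the raw response so a run can be verified or deduplicated
        # later without storing the full text in the log.
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "brands_mentioned": ";".join(brands),
    })

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_run(writer, "assistant-x", "disc-01", "discovery", 0,
        "Here are three options...", ["BrandA", "BrandB"])
print(buf.getvalue())
```

The hash column is the design choice worth debating: it keeps logs small and shareable, but drops the response text you would need for qualitative review, so a real template might store both.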
Example use cases:
Weekly drift checks after model updates
Measuring whether user preferences are preserved
Observing when and how challenger brands appear
Research on model volatility in recommendation settings
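One simple way to quantify drift for the use cases above, which the pack does not prescribe, is mean pairwise Jaccard overlap between the brand sets a prompt returns on repeated runs. A sketch, assuming each run is reduced to a list of mentioned brands:

```python
def jaccard(a, b):
    """Overlap of two brand sets; 1.0 means identical recommendations."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def stability(runs):
    """Mean pairwise Jaccard across repeated runs of one prompt.
    Values near 1.0 indicate a stable surface; low values suggest
    volatility or challenger substitution between runs."""
    pairs = [(runs[i], runs[j])
             for i in range(len(runs))
             for j in range(i + 1, len(runs))]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Three runs of the same discovery prompt; the third swaps in a challenger.
week = [["BrandA", "BrandB", "BrandC"],
        ["BrandA", "BrandB", "BrandC"],
        ["BrandA", "BrandB", "BrandD"]]
print(round(stability(week), 3))  # -> 0.667
```

Tracking this number per prompt across model updates would give the weekly drift check a single comparable score, though the suggested variance interpretation ranges in the pack may define thresholds differently.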
I would appreciate community feedback. If the prompt set is incomplete or biased, I am happy to edit and version it. Contributions are encouraged, especially domain-specific prompt modules and non-English variants.
PDF: https://acrobat.adobe.com/id/urn:aaid:sc:eu:45de8fc5-97e8-47...
I will add a GitHub repo and a CSV log template once feedback lands.
The purpose is simple: make visibility and substitution behavior measurable. Feel free to critique, fork, or improve it.
businessmate•7h ago