So, I built HolyShift: AI agents that validate product ideas by talking to real people on Reddit, HN, X, and LinkedIn … then generate a detailed GTM and “Should we build this?” report.
No synthetic data (no ChatGPT-generated personas). No predictions. Only real conversations with real people.
What it does
• Posts platform-native questions (where allowed)
• Collects real reactions, objections, pricing signals
• Clusters feedback into themes (pain, demand, adoption, pricing …)
• Runs a monitoring agent for sentiment analysis
• Produces a short validation report (PRD + GTM)
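To make the theme-clustering step concrete, here is a minimal sketch. The real system uses learned embeddings; this toy version swaps in a bag-of-words vector and a greedy single-pass grouping so it runs stand-alone. The `threshold` value and the sample feedback are made up for illustration.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(comments, threshold=0.15):
    # Greedy single-pass clustering: attach each comment to the first
    # existing cluster whose seed comment is similar enough, else
    # start a new cluster.
    clusters = []
    for c in comments:
        vec = embed(c)
        for cl in clusters:
            if cosine(vec, cl["centroid"]) >= threshold:
                cl["members"].append(c)
                break
        else:
            clusters.append({"centroid": vec, "members": [c]})
    return clusters

feedback = [
    "pricing is too high for small teams",
    "the pricing model confuses me",
    "I love the idea, real demand here",
]
themes = cluster(feedback)
```

With real embeddings you would replace `embed` with a model call and likely use a proper algorithm (k-means, HDBSCAN) instead of the greedy pass; the shape of the output — themed groups of raw comments — stays the same.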
All actions are rate limited and reviewed by a human for compliance.
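The rate limiting can be sketched as a classic token bucket in front of the human review queue. The numbers below are placeholders, not the service's actual limits.

```python
import time

class TokenBucket:
    # Token-bucket rate limiter: refills at rate_per_min, holds at
    # most `capacity` tokens; each action consumes one token.
    def __init__(self, rate_per_min: int, capacity: int):
        self.rate = rate_per_min / 60.0   # tokens per second
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Placeholder limit: 2 outbound actions per minute, burst of 2.
bucket = TokenBucket(rate_per_min=2, capacity=2)
```

Actions that pass the bucket would still land in a queue for human sign-off before posting; the limiter only caps throughput.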
How it works (technicals)
• Multi-agent pipeline (intake → landscape → engagement → monitoring → synthesis → report)
• Platform-specific prompting (HN vs Reddit vs LinkedIn …)
• Real-time sentiment + clustering via embeddings
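The pipeline above can be sketched as a chain of stage functions. The stage names come from the post; the function bodies are empty placeholders, not the actual implementation.

```python
# Each stage takes the accumulated context and enriches it.
def intake(idea):      return {"idea": idea}
def landscape(ctx):    return {**ctx, "competitors": []}
def engagement(ctx):   return {**ctx, "responses": []}
def monitoring(ctx):   return {**ctx, "sentiment": {}}
def synthesis(ctx):    return {**ctx, "themes": []}
def report(ctx):       return {**ctx, "report": "PRD + GTM summary"}

PIPELINE = [intake, landscape, engagement, monitoring, synthesis, report]

def run(idea: str) -> dict:
    ctx = idea
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx
```

The nice property of this shape is that stages are independently testable and swappable, which matters when each stage is itself an agent with its own prompts.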
Link: https://www.holyshift.ai (early beta)
What I’m looking for
• What should stay human vs. automated? Should we automate this 100%?
• How do you do product validation? Do you talk to potential users (and which ones?) before you build?
Happy to answer anything.
lovrok23•29m ago
how do you constrain the agent to stick strictly to the facts of the product hypothesis without making stuff up to please the potential customer?
Matzalar•18m ago