The idea: instead of sending surveys or running A/B tests, what if marketers could ask questions directly to an AI twin of their ideal customer — built from real data like LinkedIn profiles, CRM notes, and behavioral insights?
Each twin captures that customer’s role, pain points, buying triggers, and communication style. You can then ask:
“Would this headline make sense to you?”
“Why would you hesitate to book a demo?”
“What would make this offer more relevant?”
Under the hood:
LLMs + embedding models fine-tuned on buyer language
Real-world inputs (LinkedIn data, optional CSV uploads)
Lightweight feedback layer to validate responses
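Roughly, the grounding step looks like this (a simplified sketch, not our exact code; the field names are illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class TwinProfile:
        """Illustrative fields only; the real schema has more structure."""
        name: str
        role: str
        pain_points: list[str]
        buying_triggers: list[str]
        tone: str
        crm_notes: list[str] = field(default_factory=list)

    def build_system_prompt(p: TwinProfile) -> str:
        """Assemble the grounding context the LLM sees before any marketer question."""
        notes = "\n".join(f"- {n}" for n in p.crm_notes) or "- (none)"
        return (
            f"You are a digital twin of {p.name}, a {p.role}.\n"
            f"Pain points: {', '.join(p.pain_points)}.\n"
            f"Buying triggers: {', '.join(p.buying_triggers)}.\n"
            f"Communication style: {p.tone}.\n"
            f"Relevant CRM notes:\n{notes}\n"
            "Answer marketing questions strictly from this persona's point of view. "
            "If the context above doesn't cover something, say you're unsure instead of guessing."
        )

    if __name__ == "__main__":
        twin = TwinProfile(
            name="Alex",
            role="VP of Marketing at a 200-person SaaS company",
            pain_points=["attribution is murky", "pipeline quality over volume"],
            buying_triggers=["clear ROI within a quarter", "low lift for the team"],
            tone="direct, allergic to buzzwords",
            crm_notes=["churned from a competitor over reporting gaps"],
        )
        print(build_system_prompt(twin))

The idea is that the model only ever answers from an explicit, inspectable context instead of free-associating about a persona.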
70+ beta testers are using it to test messaging and GTM ideas before launch.
Would love feedback from HN:
How might you improve the data ingestion layer? How could I simulate a focus group? And how would you combine data to create a digital twin of a role like VP of Marketing? It needs to be broad: some users want to test against a composite of at least 10 profiles rather than a single one (rough sketch of what I mean below).
Any ideas to make the twin modeling more reliable over time?
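To make the composite-twin question concrete, here's the rough direction I've been sketching: naive frequency merging of traits across profiles (field names are placeholders):

    from collections import Counter

    def composite_persona(profiles: list[dict], top_k: int = 5) -> dict:
        """Collapse N individual profiles (say, 10 VPs of Marketing) into one
        composite twin by keeping the traits that recur most often."""
        assert profiles, "need at least one profile"
        pains, triggers, tones = Counter(), Counter(), Counter()
        for p in profiles:
            pains.update(p.get("pain_points", []))
            triggers.update(p.get("buying_triggers", []))
            if p.get("tone"):
                tones[p["tone"]] += 1
        return {
            "role": profiles[0].get("role", "VP of Marketing"),
            "pain_points": [t for t, _ in pains.most_common(top_k)],
            "buying_triggers": [t for t, _ in triggers.most_common(top_k)],
            "tone": tones.most_common(1)[0][0] if tones else "unspecified",
            "sample_size": len(profiles),  # kept so the twin's answers can be caveated
        }

The obvious worry is that straight frequency merging averages away the outliers, which is part of why I'm asking.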
Free beta: https://resonax.ai
magnumgupta•1h ago
resonaX•1h ago
On hallucinations and bias: We handle it in three ways right now —
Grounding in real data: Each twin is built using structured + unstructured data (LinkedIn profiles, CRM notes, messaging, etc.), so the LLM has contextual grounding rather than free-form guessing.
Feedback calibration: Every time users compare twin feedback with real user insights (e.g., call transcripts or campaign results), that feedback loop fine-tunes how the twin weighs language patterns and priorities.
Cross-model validation: We run prompts through multiple models and look for consensus — if the outputs diverge too much, the system flags it for review rather than showing one “confident” but wrong answer.
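For the curious, the consensus check is shaped roughly like this (simplified; model names and the threshold are placeholders, and in practice we compare embeddings rather than raw token overlap):

    from itertools import combinations

    def jaccard(a: str, b: str) -> float:
        """Crude token-overlap proxy for answer similarity."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

    def consensus_check(answers: dict[str, str], threshold: float = 0.35):
        """answers maps model name -> that model's answer to the same twin prompt.
        Returns (agreed, worst_pairwise_similarity)."""
        sims = [jaccard(x, y) for x, y in combinations(answers.values(), 2)]
        worst = min(sims) if sims else 1.0
        return worst >= threshold, worst

    # Example: two answers that barely overlap get flagged instead of shown as fact.
    answers = {
        "model_a": "I'd hesitate because the ROI isn't quantified for my team size.",
        "model_b": "The headline is unclear about who the product is for.",
    }
    agreed, score = consensus_check(answers)
    if not agreed:
        print(f"Divergence detected (min pairwise similarity {score:.2f}); routing to review.")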
It’s still early, but the goal is to make twins that evolve with real customer data rather than sit frozen like static personas.