Agents can write and ship code, but they still don't talk to users.
I built a small MCP server that lets AI agents run real user interviews and return structured insights (themes + quotes).
Aaaand.... what are "voice real user interviews"?
Most agent workflows today optimize build execution (write code, deploy, run tests).
But the discovery loop is still mostly human.
Humans usually do: hypothesis → talk to users → adjust → build.
Feels plausible that agents could eventually run that whole loop themselves.
If that happens, what comes next?
jtccc•7h ago
Agent prompt: “Why are users confused about onboarding?”
1. create_study → generates interview link
2. share with users
3. get_study_results → returns themes + quotes
Now the agent has real qualitative signal instead of relying on its own assumptions or on a human to run the research.
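The three-step flow above can be sketched in plain Python. The tool names create_study and get_study_results come from the post; everything else here (the payload shapes, the in-memory store, the example.com link, the placeholder theme clustering) is an assumption for illustration, not the actual server's API:

```python
import uuid

STUDIES = {}  # in-memory store standing in for a real backend

def create_study(question: str) -> dict:
    """Step 1: create an interview study and return a shareable link."""
    study_id = uuid.uuid4().hex[:8]
    STUDIES[study_id] = {"question": question, "responses": []}
    return {"study_id": study_id,
            "interview_link": f"https://example.com/interview/{study_id}"}

def get_study_results(study_id: str) -> dict:
    """Step 3: return structured insights, themes plus supporting quotes."""
    responses = STUDIES[study_id]["responses"]
    # A real server would cluster interview transcripts into themes;
    # here we just group verbatim quotes under a placeholder theme.
    return {"themes": [{"name": "unclustered",
                        "quotes": [r["quote"] for r in responses]}]}

# Agent-side flow: create a study, share the link with users (step 2,
# simulated by appending a response), then pull the results.
study = create_study("Why are users confused about onboarding?")
STUDIES[study["study_id"]]["responses"].append(
    {"quote": "I didn't know the setup wizard was optional."})
results = get_study_results(study["study_id"])
print(results["themes"][0]["quotes"][0])
```

In a real MCP server these two functions would be registered as tools so any MCP-capable agent could call them mid-task.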
The next step would be wiring this to product analytics so agents can trigger research from real product signals, then feed insights back into what gets built.
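That analytics wiring could be as simple as a threshold check that turns a product signal into an interview question. This is a hypothetical sketch; the function name, metric name, and threshold are all made up for illustration:

```python
def research_prompt_from_signal(metric, value, threshold):
    """Turn a degraded product metric into an interview question
    an agent could pass to create_study."""
    if value < threshold:
        return f"Why did {metric} fall to {value:.0%}?"
    return None  # signal is healthy; no research needed

# e.g. onboarding completion drops below a 60% threshold
prompt = research_prompt_from_signal("onboarding_completion", 0.42, 0.60)
print(prompt)  # → Why did onboarding_completion fall to 42%?
```

The interesting design question is the reverse direction: how the returned themes get ranked and fed back into the agent's build queue.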
Curious if agents will mostly rely on synthetic users, or still need real conversations to learn not just how to build, but what to build.