I agree that the approach shouldn't be run unsupervised, but I can imagine it being useful for surfacing insights to improve the product before real users even interact with it.
But this is completely worthless, or even misleading. There is zero value in this kind of "feedback". It will produce nonsense that sounds believable. You need to talk to real users of your software.
Why not show some actual examples of these agents doing what you describe? How exactly would you set up an agent to simulate a user?
agents/
|-- customer-expert.md - validates problem assumptions, customer reality
|-- design-lead.md - shapes solution concepts, ensures UX quality
|-- growth-expert.md - competitive landscape, positioning, distribution
|-- technical-expert.md - assesses feasibility, identifies technical risks
|-- decider-advisor.md - synthesizes perspectives, executive analysis
Ftfy. You might as well toss a coin.
They don't.
Basically this sounds like Agentic Fuzz Testing. Could it be useful? Sure. Does it have anything to do with what real users need or want? Nope.
crabmusket•1mo ago
Nothing beats feedback from humans, and there's no way around the painstaking effort of customer development to understand how to satisfy their needs using software.
bahmboo•1mo ago
Perhaps the idea is to use an LLM to emulate users such that some user-based problems can be detected early.
It is very frustrating to ship a product and hit a show-stopper right out of the gate that was missed by everyone on the team. It is also sometimes difficult to get accurate feedback from an early user group.
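The "detect show-stoppers early" idea could be sketched as a harness that walks a simulated user through a scripted task list and records where they get stuck. Everything here is illustrative: `simulate_user_step` is a hypothetical hook where an LLM-driven (or human) tester would attempt one step; it is stubbed out below.

```python
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    completed: list = field(default_factory=list)
    blockers: list = field(default_factory=list)

def simulate_user_step(step: str) -> bool:
    """Hypothetical hook: an LLM-driven attempt at one task step.
    Stubbed here to fail only on steps explicitly marked BROKEN."""
    return "BROKEN" not in step

def run_session(steps: list[str]) -> SessionReport:
    """Walk an onboarding script in order; stop at the first blocker,
    since a real user would likely give up there."""
    report = SessionReport()
    for step in steps:
        if simulate_user_step(step):
            report.completed.append(step)
        else:
            report.blockers.append(step)
            break
    return report
```

A harness like this can only catch mechanical blockers (a flow that cannot be completed), which is consistent with the "agentic fuzz testing" framing above; it says nothing about whether real users want the product.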