We've spent the last three months demoing our agent to two groups of people: software engineers and knowledge workers.
The problem we're facing isn't that our agent doesn't perform, but rather how knowledge workers approach their work.
Engineers almost always get the agent to finish the job on the first or second try, because they use structured, detailed instructions in their prompts.
When we demo to knowledge workers, on the other hand, we see them write one or two sentences and expect the agent to fill in the context: to understand their intent and the position they're in. The agent delivers, but I'm not sure the expectations are met.
So the problem we're facing right now is: how do we teach knowledge workers to structure their prompts the way an engineer would define a task before starting work?
I think the whole agent industry is facing this issue, and our question is: how do we get users to put effort into defining their tasks?