One pattern that helped in our case: instead of asking users to interact with the agent, let the agent interact with verified, pre-tested capabilities on its own and only surface results. The moment you remove "pick the right tool, hope it works, debug when it doesn't" from the human's plate, the attention cost drops dramatically.
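A minimal sketch of that pattern, with hypothetical capability names: each capability is smoke-tested once up front, the agent can only select from the ones that passed, and the user sees results rather than tool errors.

```python
# Hypothetical sketch: pre-verify capabilities before the agent can use them,
# so "pick the right tool, hope it works, debug when it doesn't" never
# reaches the human.

def fetch_report():
    return "quarterly report"

def send_summary():
    return "summary sent"

CANDIDATES = {"fetch_report": fetch_report, "send_summary": send_summary}

def verify(fn):
    # Smoke-test a capability once, up front
    # (here: just check it runs without raising).
    try:
        fn()
        return True
    except Exception:
        return False

# Only capabilities that passed verification are exposed to the agent.
VERIFIED = {name: fn for name, fn in CANDIDATES.items() if verify(fn)}

def run_agent(task):
    # The agent picks from verified capabilities and surfaces the result;
    # a missing capability degrades gracefully instead of erroring.
    fn = VERIFIED.get(task)
    return fn() if fn else "capability unavailable"

print(run_agent("fetch_report"))
```

The point isn't the dispatch table; it's that verification happens before the agent runs, so failures are caught by the builder, not the end user.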
The irony is that most agent tooling today actually increases the surface area for failure. Tools that don't install, wrong schemas, silent errors. Each one pulls a human back into the loop. Reliable infrastructure is the prerequisite for the invisible AI you describe in the article.
I don't have a great answer to that, beyond "well, it's easier to onboard the agent than to teach someone how to weave together 3 API calls" — but that's institutional logic, not really convincing to an end user.
Either way, if the AI actually works and we can predict when it needs to run, the direction we're heading is to just run the process and keep the user informed.
wespiser_2018•1h ago
I wrote this article after working on chatbots over the past year. The pattern I kept seeing was that the hard part wasn't getting agents to work, but getting busy people to use them.
Curious if others have had a similar experience.