--- When we launched a month ago, we thought we had the right approach: a "one-shot" agent where you give it a high-level task like "order toothpaste from Amazon," and it would figure out the plan and execute it.
But we quickly ran into a problem that we've been struggling with ever since: the user experience was completely hit-or-miss. Sometimes the agent worked like magic, but other times it would get stuck, generate the wrong plan, or just wander off course. It wasn't reliable enough for anyone to trust it.
This forced us to go back to the drawing board and question the UX. We spent the last few weeks experimenting with three different ways a user could build an agent:
A) Drag-and-drop workflows: Similar to tools like n8n. This approach creates very reliable agents, but we found that the interface felt complex and intimidating for new users. One tester (my wife) said: "This is more work than just doing the task myself." Building a simple workflow took 20+ minutes of configuration.
B) The "one-shot" agents: This was our starting point. You give the agent a high-level goal and it does the rest. It feels magical when it works, but it's brittle, and smaller local models really struggle to create good plans on their own.
C) Plan-follower agents: A middle ground where a human provides a simple, high-level plan in natural language, and the LLM executes each step. The LLM doesn't have to plan; it just has to follow instructions, like a junior employee.
--- After building and trying all three, we've landed on C) as the best trade-off between reliability and ease of use. Here's the demo https://youtu.be/ulTjRMCGJzQ
For example, instead of just saying "order toothpaste," the user provides a simple plan:
1. Navigate to Amazon
2. Search for Sensodyne toothpaste
3. Select 1 pack of Sensodyne toothpaste from the results
4. Add the selected toothpaste to the cart
5. Proceed to checkout
6. Verify that there is only one item in the cart. If there is more than one item, alert me
7. Finally place the order
With this guidance, our success rate jumped from 30% to ~80%, even with local models. The trade-off: users spend 30 seconds writing a plan instead of just stating a goal, but they get reliability in return. Note that our agent builder generates a good starting plan, so the user just has to edit/customize it.
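To make that concrete, here's a minimal sketch of what a plan-follower loop can look like. This is not our actual implementation: call_llm and execute_in_browser are placeholder stubs that you'd wire up to a real chat model and a real browser driver.

    # Minimal plan-follower sketch (option C). The human supplies the plan; the
    # model only executes one step at a time. Both helpers below are placeholders.

    def call_llm(prompt: str) -> str:
        # Placeholder: ask any chat model to turn one step into a browser action.
        return f"(model-chosen action for: {prompt!r})"

    def execute_in_browser(action: str) -> str:
        # Placeholder: perform the action and return the new page state.
        print(f"executing: {action}")
        return "updated page state"

    def run_plan(plan_steps: list[str]) -> None:
        page_state = "blank page"
        for i, step in enumerate(plan_steps, start=1):
            # The model never plans. It only maps one human-written step plus the
            # current page state to a concrete action, like a junior employee
            # following instructions.
            action = call_llm(f"Current page: {page_state}\nExecute this step: {step}")
            page_state = execute_in_browser(action)
            print(f"[{i}/{len(plan_steps)}] done: {step}")

    run_plan([
        "Navigate to Amazon",
        "Search for Sensodyne toothpaste",
        "Add 1 pack of Sensodyne toothpaste to the cart",
        "Proceed to checkout and place the order",
    ])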
--- You can try out our agent builder and let us know what you think. We're big proponents of privacy, so we have first-class support for local LLMs. GPT-OSS via Ollama or LMStudio works great!
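If you want to point things at a local model yourself, both Ollama and LM Studio expose OpenAI-compatible endpoints, so any OpenAI-style client works. Here's a quick sketch (the ports are their defaults, and the gpt-oss:20b tag is just an assumption; use whatever model you have pulled locally):

    # Talking to a local model through an OpenAI-compatible endpoint.
    from openai import OpenAI

    # Ollama's OpenAI-compatible server (default port 11434):
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    # For LM Studio, point at its local server instead (default port 1234):
    # client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="gpt-oss:20b",  # assumed tag; check what you have with `ollama list`
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(resp.choices[0].message.content)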
I'll be hanging around here most of the day, happy to answer any questions!