I became curious after seeing Todoist’s voice ramble feature. I wanted to try building a simple version of it:
- creating tasks
- creating subtasks under a task
- marking tasks or subtasks as done
- moving or deleting items
For example, a user might say: “I need to go to the grocery store. While I’m there, buy milk, cheese, and bread.”
Ideally this would result in an ordered JSON list of actions:
- create task “Grocery Store”
- create subtasks “Buy milk”, “Buy cheese”, “Buy bread” under that task
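Concretely, something like this (field names are just illustrative):

```json
[
  { "action": "create_task", "title": "Grocery Store" },
  { "action": "create_subtask", "parent": "Grocery Store", "title": "Buy milk" },
  { "action": "create_subtask", "parent": "Grocery Store", "title": "Buy cheese" },
  { "action": "create_subtask", "parent": "Grocery Store", "title": "Buy bread" }
]
```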
I have the speech-to-text part covered, but I'm struggling with the action extraction and output part. I'm using gpt-5-mini with structured output in text format mode (not tool calling mode). The schema is discriminated: each action type (create_task, move_task, mark_task_done, etc.) has its own shape and required parameters, and the output must conform exactly.
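For reference, a stripped-down illustration of the schema shape as Pydantic models (my real schema has more action types and fields):

```python
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field

class CreateTask(BaseModel):
    action: Literal["create_task"]
    title: str

class CreateSubtask(BaseModel):
    action: Literal["create_subtask"]
    parent: str
    title: str

class MarkTaskDone(BaseModel):
    action: Literal["mark_task_done"]
    title: str

# Discriminated union: the "action" field selects which shape applies.
Action = Annotated[
    Union[CreateTask, CreateSubtask, MarkTaskDone],
    Field(discriminator="action"),
]

class Plan(BaseModel):
    actions: list[Action]
```

The JSON schema of `Plan` is what gets passed as the structured-output format.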
I’m running into reliability issues with action selection. Sometimes the model picks the wrong action entirely, for example emitting a move action when the transcript clearly implies creation. And with the grocery example above, I sometimes get the subtask creation actions but the parent “Grocery Store” task creation is missing.
ChatGPT suggests piling more and more ad-hoc rules into the system prompt for guidance, but that feels fragile and hasn't actually improved results in my testing.
Curious how people approach a problem like this. Also, how does this setup scale once there are 20+ possible system actions? Which parts of the system meaningfully affect robustness?
- Crafting a better system prompt?
- Running two passes, where the first extracts the intended actions in some intermediate format and the second converts that into the structured output? (Rough sketch after this list.)
- Using tool calling mode?
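To make the two-pass idea concrete, here's roughly what I have in mind. This is a sketch, assuming a recent openai-python SDK where `client.chat.completions.parse` accepts a Pydantic model; `Plan` is the model from the schema sketch above, and the prompts are placeholders:

```python
from openai import OpenAI

client = OpenAI()

PASS_1_PROMPT = (
    "List every task operation the user implies, one per line, in plain "
    "English and in order. Include implicit parent tasks explicitly."
)

def extract_actions(transcript: str):
    # Pass 1: free-form intermediate list; no schema pressure yet.
    draft = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": PASS_1_PROMPT},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content

    # Pass 2: map the draft lines (not the raw transcript) onto the
    # discriminated schema. Plan is the Pydantic model defined earlier.
    result = client.chat.completions.parse(
        model="gpt-5-mini",
        messages=[
            {"role": "system", "content": "Convert each line into exactly one action object."},
            {"role": "user", "content": draft},
        ],
        response_format=Plan,
    )
    return result.choices[0].message.parsed.actions
```

The hope is that pass 1 surfaces the implicit “create the parent task first” step before any schema constraints come into play. No idea if this actually helps, hence the question.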