It feels like the ideal model for an LLM provider is one where users have to follow up with another prompt to clarify or improve the result — and which behaves this way maybe 50% of the time, to avoid excessive churn.