A single user action can trigger anywhere from a few to dozens of LLM calls (tool use, retries, reasoning steps), and with token-based pricing the cost per action can vary widely from one request to the next.
How are builders here planning for this when pricing their SaaS?
Are you just padding margins, limiting usage, or building internal cost tracking? Also curious: would a service that offers predictable pricing for AI APIs (e.g., a fixed subscription cost) actually be useful to people building agentic workflows?
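For anyone going the internal cost tracking route, a minimal sketch of what that can look like: accumulate per-call cost from token counts and a price table. The model name and per-1K-token prices below are made up for illustration, not any provider's real rates.

```python
# Hypothetical per-1K-token prices; substitute your provider's real rates.
PRICE_PER_1K = {"model-a": {"in": 0.003, "out": 0.015}}


class CostTracker:
    """Accumulates USD cost across the LLM calls made for one user action."""

    def __init__(self):
        self.total_usd = 0.0
        self.calls = 0

    def record(self, model, tokens_in, tokens_out):
        p = PRICE_PER_1K[model]
        cost = tokens_in / 1000 * p["in"] + tokens_out / 1000 * p["out"]
        self.total_usd += cost
        self.calls += 1
        return cost


tracker = CostTracker()
tracker.record("model-a", tokens_in=1200, tokens_out=400)  # one agent step
tracker.record("model-a", tokens_in=2500, tokens_out=800)  # tool-use follow-up
print(f"{tracker.calls} calls, ${tracker.total_usd:.4f}")
```

Logging this per user action (rather than per month) is what exposes the long tail of expensive agentic requests that break a flat-rate pricing model.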
clearloop•5h ago
Barathkanna•4h ago
clearloop•2h ago
This topic actually gives me an idea: I could introduce a built-in gas meter for tokens.