I'd done napkin math beforehand, so I knew it was probably a bug, but still. Turns out it was only partially a bug; the rest was me needing to rethink how I'd built the thing. I spent the next couple of days ripping it apart: making tweaks, testing with live data, checking results, trying again. The upshot: I was sending API requests too often and wasn't optimizing what I sent or received.
Here's what moved the needle, roughly big to small (besides that bug, which was costing me a buck a day on its own):
- Dropped Claude Sonnet entirely - tested both models on the same data, Haiku actually performed better at a third of the cost
- Started batching everything - hourly calls were a money fire
- Filter before the AI - "lol" and "thanks" make up a lot of online chatter, and I was paying an AI to tell me that's not feedback. That said, I still process agreements like "+1" and "me too."
- Shorter outputs - "H/M/L" instead of "high/medium/low", 40-char title recommendation
- Strip code snippets before processing - they just reiterate the issue and bloat the call
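The filter-and-strip steps above can be sketched roughly like this. The word lists, length cutoff, and function names are my own illustration, not OP's actual code:

```python
import re

# Hypothetical word lists: pure chatter gets dropped before any API call,
# but short agreements still carry signal and get processed.
CHATTER = {"lol", "lmao", "thanks", "thank you", "ty", "nice", "cool"}
AGREEMENTS = {"+1", "me too", "same", "same here", "agreed"}

# Fenced code blocks usually restate the issue in a much more verbose form.
CODE_FENCE = re.compile(r"```.*?```", re.DOTALL)

def strip_code(text: str) -> str:
    """Replace fenced code blocks with a short placeholder to shrink the prompt."""
    return CODE_FENCE.sub("[code omitted]", text).strip()

def worth_sending(text: str) -> bool:
    """Return True if the message should reach the LLM at all."""
    norm = text.strip().lower().rstrip("!.")
    if norm in AGREEMENTS:
        return True   # agreements like "+1" still get processed
    if norm in CHATTER:
        return False  # pure chatter: skip the API call entirely
    return len(norm) > 0
```

The point is that every message dropped here is an API call (and its tokens) you never pay for, and every stripped snippet shrinks the ones you do make.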
End of the week: pennies a day. Same quality.
I'm not building a VC-backed app that can run at a loss for years. I'm unemployed, trying to build something that might also pay rent. The math has to work from day one.
The upside: these savings let me 3x my pricing tier limits and add intermittent quality checks. Headroom I wouldn't have had otherwise.
Happy to answer questions.
ok_orco•7h ago
Most of the cost savings came from not sending stuff to the LLM that didn't need to go there, plus the batch API is half the price of real-time calls.