AI hallucinates. How do you keep it from fucking up automations?
4•Gioppix•1h ago
Every time I build simple automations, LLMs find a way to screw something up.
At the end of the day I still have to manually review every critical action (emails, SMS, invoices...). Why bother automating then? How do you manage it?
I remember studying this in uni lol. How do you use it?
storystarling•52m ago
I've found the only way to make this work reliably is to treat the LLM as a fallible component inside a state machine rather than as the controller. I've been using LangGraph to enforce structured outputs and run validation checks before any side effects happen. If the output doesn't match the schema or the business logic, it retries or halts. It looks like a lot of boilerplate at first, but it's necessary if you want to trust the system with actual invoices.
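Here's roughly the shape of it with the framework stripped out: a plain-Python sketch of the validate-before-side-effect loop, not actual LangGraph code. `call_llm`, the retry count, and the invoice fields are placeholder assumptions; the point is just that nothing touches the outside world until the output parses and passes every check, and after enough failures it halts for a human instead of guessing.

```python
import json

# Hypothetical stand-in for the actual model call (e.g. a LangGraph node).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

# Example schema for an invoice draft; adjust to your own fields.
REQUIRED_FIELDS = {"recipient", "amount_cents", "due_date"}

def validate(payload: dict) -> list[str]:
    """Schema + business-logic checks that must pass before any side effect."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - payload.keys()]
    amount = payload.get("amount_cents")
    if not isinstance(amount, int) or amount <= 0:
        errors.append("amount_cents must be a positive integer")
    return errors

def draft_invoice(prompt: str, max_retries: int = 3) -> dict:
    """Retry the LLM until its output passes validation, otherwise halt."""
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: just retry
        errors = validate(payload)
        if not errors:
            return payload  # only now is it safe to hit the invoicing API
        # Feed the failures back so the next attempt can correct them.
        prompt += f"\nPrevious attempt failed validation: {errors}"
    raise RuntimeError("output never passed validation; route to a human")
```

The key design choice is that the side effect (sending the invoice) lives outside this function entirely, so a hallucinated field can only ever cost you a retry or a manual review, never a bad send.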