AI hallucinates. Do you ever double-check the output?
6•jackota•1h ago
Been building AI workflows, and they randomly hallucinate and do something stupid, so I end up manually checking everything anyway to approve the AI-generated content (messages, emails, invoices, etc.), which defeats the whole point.
Anyone else? How did you manage it?
Comments
AlexeyBrin•1h ago
You can't be 100% sure the AI won't hallucinate. If you don't want to check manually, you can have a different AI check the output and, if it finds something suspect, flag it for a human to verify. Even better, have two different AIs check the output and flag it if they don't agree.
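Rough sketch of what I mean (call_model, send_to_human_review, and auto_send are hypothetical stand-ins for whatever API client and review queue you already have):

    def cross_check(draft: str, source: str) -> bool:
        """Ask two independent models whether the draft is supported by the source."""
        prompt = (
            "Does the following draft contain any claim, number, or date "
            "not supported by the source? Answer YES or NO.\n\n"
            f"SOURCE:\n{source}\n\nDRAFT:\n{draft}"
        )
        verdict_a = call_model("model-a", prompt)  # hypothetical wrapper, provider A
        verdict_b = call_model("model-b", prompt)  # hypothetical wrapper, provider B
        # Safe only if both reviewers say NO; any objection or disagreement fails.
        return verdict_a.strip() == verdict_b.strip() == "NO"

    if cross_check(draft_email, crm_record):       # draft_email/crm_record: your data
        auto_send(draft_email)
    else:
        send_to_human_review(draft_email)          # hypothetical review queue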
Gioppix•1h ago
I also don't trust LLMs, but I still find automations useful. Even with human-in-the-loop they save a bunch of time. Clicking "Approve & Send" is much quicker than manually writing out the email, and I just rewrite the 5% that contains hallucinations.
Zigurd•1h ago
You have put your finger on why agent-assisted coding often doesn't suck, and other use cases of LLMs often do. Lint and the compiler get their licks in before you even smoke test the code. There aren't two layers of deterministic, algorithmic checking for your emails or invoices.
So before anyone concludes that coding agents prove that AI can be useful, find some use cases with similar characteristics.
7777777phil•1h ago
I have been building research automation with LangGraph for the past 2 months. We always put a human-in-the-loop checkpoint after each critical step; it might be annoying now, but I think it will save us long-term.
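In plain Python the checkpoint is just a gate between steps, something like this (not LangGraph's actual API; draft_plan, summarize, and topic are made-up placeholders):

    from dataclasses import dataclass

    @dataclass
    class StepResult:
        name: str
        output: str

    def human_checkpoint(result: StepResult) -> StepResult:
        """Block until a person approves or edits the output of a critical step."""
        print(f"--- {result.name} ---\n{result.output}")
        answer = input("Approve as-is? [y/N] ").strip().lower()
        if answer != "y":
            result.output = input("Paste the corrected output: ")
        return result

    # Hypothetical pipeline: only the critical steps pay the approval cost.
    plan = human_checkpoint(StepResult("research plan", draft_plan(topic)))
    summary = human_checkpoint(StepResult("summary", summarize(plan.output)))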
codingdave•51m ago
Ever? More like always. Keeping humans in the loop is the current best practice. If you truly need to automate something that cannot afford a human checkpoint, find a deterministic solution for it, not LLMs.
varshith17•14m ago
Build validation layers, not trust. For structured outputs (invoices, emails), use JSON schemas + fact-checking prompts where a second AI call verifies critical fields against source data before you see it.
Real pattern: AI generates → automated validation catches type/format errors → second LLM does an adversarial review ("check for hallucinated numbers/dates") → you review only flagged items plus random samples. That turns "check everything" into "check exceptions" and cuts review time by ~80%.
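Minimal sketch of that pipeline, assuming a jsonschema validator for the first layer and a hypothetical llm() helper plus source_record for the adversarial pass:

    import json
    import random
    from jsonschema import validate, ValidationError  # pip install jsonschema

    INVOICE_SCHEMA = {
        "type": "object",
        "required": ["invoice_number", "amount", "due_date"],
        "properties": {
            "invoice_number": {"type": "string"},
            "amount": {"type": "number", "minimum": 0},
            "due_date": {"type": "string"},
        },
    }

    def review_invoice(raw_output: str, source_record: dict) -> str:
        """Return 'auto', 'flagged', or 'sample' for an AI-generated invoice."""
        # Layer 1: deterministic structure/type checks catch malformed output.
        try:
            invoice = json.loads(raw_output)
            validate(instance=invoice, schema=INVOICE_SCHEMA)
        except (json.JSONDecodeError, ValidationError):
            return "flagged"

        # Layer 2: adversarial LLM pass against source data (llm() is hypothetical).
        verdict = llm(
            "Compare this invoice to the source record. List any hallucinated "
            f"numbers or dates, or reply OK.\n\nINVOICE: {invoice}\n\nSOURCE: {source_record}"
        )
        if verdict.strip() != "OK":
            return "flagged"

        # Layer 3: humans still spot-check a random sample of the 'clean' ones.
        return "sample" if random.random() < 0.05 else "auto"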
exabrial•7m ago
The new guys on my team do not check it. They already had problems checking their own work; AI is just amplifying the underlying human problem.