tines•1h ago
So you have to be able to identify a priori what is and isn't a hallucination, right?
makeavish•1h ago
Yeah, reading the headline got me excited too.
I thought they were going to propose some novel solution, or use the recent OpenAI research on reward function optimization.
ares623•1h ago
The oracle problem is solved. Just use an actual oracle.
happyPersonR•48m ago
I guess the real question is: how often do you see the same class of hallucination? For something where you're using an LLM agent/workflow and running it repeatedly, I could totally see this being worthwhile.
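A minimal sketch of what that might look like, assuming a hypothetical classify_hallucination oracle (the hard part the thread is debating) and a stub run_agent call: run the same workflow repeatedly and tally which hallucination classes recur.

    from collections import Counter

    def classify_hallucination(output: str) -> str | None:
        """Hypothetical oracle: return a hallucination class label, or None if
        the output looks fine. Deciding this reliably is the open problem."""
        if "citation needed" in output.lower():
            return "fabricated_reference"
        return None

    def run_agent(prompt: str) -> str:
        """Stub for whatever LLM agent/workflow you run repeatedly."""
        return f"stub response for: {prompt}"

    def hallucination_profile(prompt: str, runs: int = 50) -> Counter:
        """Run the same workflow many times and count recurring hallucination classes."""
        counts: Counter[str] = Counter()
        for _ in range(runs):
            label = classify_hallucination(run_agent(prompt))
            if label is not None:
                counts[label] += 1
        return counts

    if __name__ == "__main__":
        print(hallucination_profile("Summarize last quarter's incident reports"))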