The RAG-based mitigation is interesting but, as mentioned, quite limited. It only works if the user can provide ground-truth data, which is relatively straightforward for code generation but much harder for most other kinds of factual information. We can't rely directly on data from the web, since the sources need to be carefully reviewed by a human first, and that review is a labor-intensive task requiring domain experts.
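Just to illustrate why the curation step is the bottleneck, here's a toy sketch (the corpus, retriever, and function names are all invented for this comment, not any real library's API). The system can only answer from passages a human has already vetted, and it has to refuse when nothing matches:

    # Hypothetical toy RAG loop: answer only from a hand-vetted corpus.
    vetted_corpus = [
        # Every entry must be reviewed by a human expert before it
        # goes in; that manual review is the labor cost in question.
        ("python-docs", "The GIL serializes execution of Python bytecode."),
        ("rfc-9110", "An HTTP request begins with a method and a target."),
    ]

    def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
        # Naive keyword-overlap ranking, standing in for a real retriever.
        words = set(query.lower().split())
        scored = [(len(words & set(text.lower().split())), src, text)
                  for src, text in vetted_corpus]
        scored.sort(reverse=True)
        return [(src, text) for score, src, text in scored if score > 0][:k]

    def answer(query: str) -> str:
        hits = retrieve(query)
        if not hits:
            # Without a vetted source, the only safe answer is no answer.
            return "No vetted source covers this question."
        src, text = hits[0]
        return f"{text} [source: {src}]"

Nothing in this loop helps you build vetted_corpus in the first place; that part is still entirely human labor.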
So this approach seems like a band-aid rather than a generally applicable fix. I'm not in the AI industry, but from a user's perspective the hallucination problem looks like it requires a much more foundational solution.