But to the article specifically: I thought RAG's benefit was that you could supply "facts" in the prompt from the provided source documents/vector results, so the LLM's output would always be grounded in some canonical reference?
What could work is round-trip verification, like running a serializer/deserializer back to back for an equality check: run an LLM over the RAG output, have it point out any inconsistencies with the retrieved data, and correct them. [x] Thinking for RAG.
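A minimal sketch of what that verification pass might look like, assuming a hypothetical `chat(prompt)` helper standing in for whatever chat-completion API you use:

```python
# Round-trip verification sketch for RAG output (assumptions: `chat` is a
# hypothetical stand-in; wire it to your provider's client -- OpenAI,
# Anthropic, a local model, etc.).

def chat(prompt: str) -> str:
    raise NotImplementedError("replace with your LLM provider call")

VERIFY_PROMPT = """You are a fact-checker. Below are source passages and an
answer generated from them. List every claim in the answer that is not
supported by the passages, then rewrite the answer using only supported
claims. If everything is supported, return the answer unchanged.

SOURCES:
{sources}

ANSWER:
{answer}
"""

def verify_rag_answer(retrieved_chunks: list[str], answer: str) -> str:
    """Run the generated answer back through an LLM against the
    retrieved passages and return a corrected version."""
    sources = "\n---\n".join(retrieved_chunks)
    return chat(VERIFY_PROMPT.format(sources=sources, answer=answer))
```

The "equality check" here is necessarily softer than a serde round-trip, since the verifier is itself an LLM judging support rather than comparing bytes.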
Terr_•7mo ago
That seems like it would smooth the roughest edges of the experience while introducing less falsehood or misdirection.