mingtianzhang•4h ago
1. The LLM itself selects the most relevant documents — no vector database needed.
2. The selected documents are then placed directly into the context for generation.
This kind of in-context retrieval can substantially improve retrieval accuracy over traditional vector-based methods, since the model reasons over the full document text rather than relying on embedding similarity alone.
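The two steps above can be sketched as follows. This is a minimal illustration, not a reference implementation: `call_llm` is a hypothetical placeholder for whatever chat-completion API you use, stubbed here so the example runs offline.

```python
# Two-stage in-context retrieval sketch.
# Assumption: `call_llm` stands in for a real LLM API call; the stub below
# returns canned responses so the example is self-contained and runnable.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send the prompt to an LLM.
    if "Return the indices" in prompt:
        return "0,2"  # pretend the model selected documents 0 and 2
    return "Paris"    # pretend the model answered from the context

def select_documents(question: str, docs: list[str]) -> list[str]:
    # Step 1: show the model all documents and ask it to pick the relevant ones.
    listing = "\n".join(f"[{i}] {d}" for i, d in enumerate(docs))
    prompt = (
        f"Question: {question}\n\nDocuments:\n{listing}\n\n"
        "Return the indices of the most relevant documents, comma-separated."
    )
    indices = [int(i) for i in call_llm(prompt).split(",")]
    return [docs[i] for i in indices]

def answer(question: str, docs: list[str]) -> str:
    # Step 2: place only the selected documents into the generation context.
    selected = select_documents(question, docs)
    context = "\n\n".join(selected)
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

docs = [
    "Paris is the capital of France.",
    "The Nile is the longest river in Africa.",
    "France shares a border with Spain.",
]
print(answer("What is the capital of France?", docs))
```

The trade-off is cost: every document must fit in (and be processed by) the model's context window at selection time, whereas a vector index only embeds documents once.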