Projects like Claude Code showed a simpler path: In-Context Retrieval, where the LLM reasons directly over the context to find what it needs instead of outsourcing search to external infrastructure.
PageIndex takes that one step further with In-Context Indexing.
If retrieval happens in-context, the index should live there too.
Each document is transformed into a hierarchical, human-readable tree structure (like a table-of-contents tree index) inside the model's context window.
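To make this concrete, here is a minimal sketch of what such a tree index might look like and how it could be rendered into the context window. The node fields (`title`, `node_id`, `summary`, `nodes`) are illustrative assumptions, not PageIndex's actual schema:

```python
# A hypothetical in-context tree index: a table-of-contents-style
# structure with IDs and short summaries that an LLM can read directly.
tree_index = {
    "title": "Annual Report 2024",
    "node_id": "0000",
    "nodes": [
        {
            "title": "Financial Statements",
            "node_id": "0001",
            "summary": "Income statement, balance sheet, cash flow.",
            "nodes": [
                {"title": "Balance Sheet", "node_id": "0002",
                 "summary": "Assets, liabilities, equity at year end."},
            ],
        },
        {
            "title": "Risk Factors",
            "node_id": "0003",
            "summary": "Market, credit, and operational risks.",
        },
    ],
}

def render_tree(node, depth=0):
    """Render the tree as indented, human-readable text for the prompt."""
    line = "  " * depth + f"[{node['node_id']}] {node['title']}"
    if "summary" in node:
        line += f" -- {node['summary']}"
    lines = [line]
    for child in node.get("nodes", []):
        lines.extend(render_tree(child, depth + 1))
    return lines

print("\n".join(render_tree(tree_index)))
```

The rendered text is what the model actually sees: a few dozen tokens that describe the whole document, with stable IDs it can use to ask for any branch.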
The LLM reads the structure, identifies the relevant branches, opens them, and reasons its way to the relevant content: no embeddings, no chunking, no opaque vector index the model can't interpret.
Retrieval and indexing, both inside the model.
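The retrieval loop itself can be sketched in a few lines. This is a hypothetical illustration, not PageIndex's implementation: the rendered tree, the section texts, and the `ask_llm` placeholder are all assumptions standing in for a real document and a real model call.

```python
# Hypothetical tree-search retrieval loop. The tree index has already
# been rendered into one "[id] title -- summary" line per node.
TREE_TEXT = """\
[0000] Annual Report 2024
  [0001] Financial Statements -- income statement, balance sheet
    [0002] Balance Sheet -- assets, liabilities, equity
  [0003] Risk Factors -- market, credit, operational risks"""

# Full text of each leaf section, keyed by node id (toy data).
SECTION_TEXT = {
    "0002": "Total assets were $1.2B; total liabilities were $0.4B ...",
    "0003": "The company is exposed to interest-rate risk ...",
}

def ask_llm(prompt):
    # Placeholder: a real system would send the prompt to an LLM and
    # parse the node ids it selects. Here we fake a plausible answer.
    return ["0002"]

def retrieve(question):
    """Ask the model which branches to open, then open them."""
    prompt = (f"Document structure:\n{TREE_TEXT}\n\n"
              f"Question: {question}\n"
              "Which node ids should be opened? Reply with a list of ids.")
    chosen = ask_llm(prompt)
    # The opened sections become the context for answering the question.
    return {nid: SECTION_TEXT[nid] for nid in chosen if nid in SECTION_TEXT}

context = retrieve("What were total assets at year end?")
```

Because the index is plain text, the same model that answers the question can also inspect, critique, and navigate the index, which is the point of keeping both steps in-context.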