Why this exists: RAG pipelines often suffer from "context rot," where long prompts or noisy retrieval degrade answer quality. CRoM focuses on injecting only efficient, relevant context.
What it does:
- Token-aware prompt packing under strict token limits
- Cross-encoder fallback for low-confidence answers
- FastAPI-based orchestration
- Modular and ready to integrate with existing RAG stacks
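For readers unfamiliar with token-aware packing, here is a minimal sketch of the general idea: greedily add the highest-ranked retrieved chunks until a token budget is exhausted. The whitespace tokenizer and function names are illustrative assumptions, not CRoM's actual API.

```python
def count_tokens(text: str) -> int:
    # Stand-in tokenizer (assumption); a real pipeline would use
    # the target model's tokenizer for accurate counts.
    return len(text.split())

def pack_chunks(ranked_chunks: list[str], budget: int) -> list[str]:
    # Greedy packing: chunks are assumed sorted by relevance, best first.
    packed, used = [], 0
    for chunk in ranked_chunks:
        cost = count_tokens(chunk)
        if used + cost > budget:
            continue  # skip chunks that would overflow the limit
        packed.append(chunk)
        used += cost
    return packed

chunks = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "France borders Germany, Spain, and Italy, among others.",
]
print(pack_chunks(chunks, budget=12))
```

A real implementation would also weigh relevance against cost (e.g. drop a long mid-ranked chunk in favor of two short ones), but the budget check above is the core constraint.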
GitHub: https://github.com/Flamehaven/CRoM-Context-Rot-Mitigation--E...
Still early — feedback and suggestions welcome.