Stop Blaming Embeddings, Most RAG Failures Come from Bad Chunking
2•wehadit•57m ago
Everyone keeps arguing about embeddings, vector DBs, and model choice, but in real systems, those aren’t the things breaking retrieval.
Chunking drift is. And almost nobody monitors it.
A tiny formatting change in a PDF or HTML file silently shifts boundaries. Overlaps become inconsistent. Semantic units get split mid-thought. Headings flatten. Cross-format differences explode. By the time retrieval quality drops, people start tweaking the model… while the actual problem happened upstream.
If you diff chunk boundaries across versions or track chunk-size variance, the drift is obvious. But most teams don’t even version their chunking logic, let alone validate segmentation or check adjacency similarity.
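For concreteness, here's a rough sketch of that kind of check, assuming non-overlapping chunks coming out of whatever chunker you already run; the helper names are made up for illustration, stdlib only:

    # Diff chunk boundaries between two runs over the same document and
    # track chunk-size variance; alert when either jumps between ingests.
    import statistics

    def boundaries(chunks):
        # Character offsets, assuming chunks are consecutive slices of the doc.
        offsets, pos = [], 0
        for c in chunks:
            offsets.append((pos, pos + len(c)))
            pos += len(c)
        return offsets

    def drift_report(old_chunks, new_chunks):
        old_b = set(boundaries(old_chunks))
        new_b = set(boundaries(new_chunks))
        sizes = [len(c) for c in new_chunks]
        return {
            "boundaries_moved": len(old_b ^ new_b),
            "chunk_count_delta": len(new_chunks) - len(old_chunks),
            "size_mean": statistics.mean(sizes),
            "size_stdev": statistics.pstdev(sizes),
        }

Run something like this on every ingest; a spike in boundaries_moved or size_stdev is usually a parser or loader change, not a model problem.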
The industry treats chunking like a trivial preprocessing step. It's not.
It’s the single biggest source of retrieval collapse, and it’s usually invisible.
Before playing with new embeddings, fix your segmentation pipeline. Chunking is repetitive, undifferentiated engineering, but if you don’t stabilize it, the rest of your RAG stack is built on sand.
Comments
billconan•46m ago
How do you do chunking well? I recently tried LlamaIndex and some other open-source solutions. The results were poor; some words or sentences were split in the middle.
popidge•27m ago
Chunking strategy is really difficult and, like you say, so important to RAG. I'm currently battling with it in a "Podcast archive -> active social trend" clip-finder app I'm working on. You have to really understand your source material and how it's formatted, consider preprocessing, consider when and where semantic breaks happen and how you can deterministically handle that in the specific domain.
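One deterministic baseline that also avoids the mid-word splits billconan mentions is to split on sentence boundaries and pack whole sentences into a size budget. A rough sketch (the regex splitter, budget, and overlap are illustrative, not tuned for any particular corpus):

    # Split on sentence boundaries, then pack whole sentences into chunks
    # up to a character budget so nothing breaks mid-word or mid-sentence.
    import re

    def sentence_chunks(text, max_chars=1000, overlap_sents=1):
        # Naive splitter; swap in one that fits your domain and language.
        sents = re.split(r"(?<=[.!?])\s+", text.strip())
        chunks, cur = [], []
        for s in sents:
            if cur and sum(len(x) for x in cur) + len(s) > max_chars:
                chunks.append(" ".join(cur))
                cur = cur[-overlap_sents:]  # carry a little context forward
            cur.append(s)
        if cur:
            chunks.append(" ".join(cur))
        return chunks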
Adjacency similarity is a must, otherwise you leave perfectly cromulent results on the table because they didn't have the right cosine score in a vacuum.
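A minimal version of that check, assuming chunks are kept in document order alongside their embeddings (the 0.75 threshold is a placeholder, not a recommendation):

    # After retrieving a chunk, pull in neighbors whose embeddings are close
    # to the hit, even if their query score alone wouldn't make the cut.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def expand_with_neighbors(hit_idx, chunks, embs, threshold=0.75):
        # chunks and embs are in document order; embs[i] belongs to chunks[i]
        keep = {hit_idx}
        for j in (hit_idx - 1, hit_idx + 1):
            if 0 <= j < len(chunks) and cosine(embs[hit_idx], embs[j]) >= threshold:
                keep.add(j)
        return [chunks[i] for i in sorted(keep)]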
There is some early stuff from Apple's research labs and the ColBERT team on late interaction embeddings (https://arxiv.org/abs/2112.01488) which looks to ease that burden by generating compressed token-level embeddings across a document.