I’ve generated embeddings for “objects” or whole documents to get similarity scores. Helps with “relevant articles”-type features.
I’ve also made embeddings for paragraphs or fixed-size chunks for RAG lookups. Good for semantic search.
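Both patterns boil down to the same few lines. A rough sketch using the sentence-transformers library (the model name is an arbitrary example, and a real pipeline would chunk by tokens rather than characters):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary example model

# 1) Whole-document embeddings -> similarity scores ("relevant articles")
articles = ["How to tune Postgres indexes", "Intro to sourdough baking"]
article_embs = model.encode(articles)
current = model.encode("Speeding up slow SQL queries")
print(util.cos_sim(current, article_embs))  # one score per article

# 2) Fixed-size chunks -> semantic search / RAG lookups
doc = " ".join(["some long document text"] * 100)
chunk_size = 256  # characters here for brevity; real pipelines chunk by tokens
chunks = [doc[i:i + chunk_size] for i in range(0, len(doc), chunk_size)]
chunk_embs = model.encode(chunks)
query = model.encode("what does the document say about indexes?")
best = int(util.cos_sim(query, chunk_embs).argmax())
print(chunks[best])  # most relevant chunk for the query
```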
I don’t understand why you would want embeddings on sentences.
> Chunking Strategies
> Sentence-level chunking works well for most use cases, especially when the document structure is unclear or inconsistent.
xfalcox•5mo ago
I do recommend using https://github.com/huggingface/text-embeddings-inference for fast inference.
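Once it's serving a model, TEI exposes a small HTTP API. A minimal sketch of calling it (the port, docker tag, model id, and input texts here are all assumptions, not prescriptions):

```python
# Assumes a TEI server is already running locally, e.g. something like:
#   docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 \
#       --model-id BAAI/bge-base-en-v1.5
import requests

resp = requests.post(
    "http://127.0.0.1:8080/embed",
    json={"inputs": ["first forum topic", "second forum topic"]},
    timeout=30,
)
resp.raise_for_status()
embeddings = resp.json()  # list of float vectors, one per input
print(len(embeddings), len(embeddings[0]))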
ipsum2•5mo ago
xfalcox•5mo ago
My use case is basically a recommendation engine, where I retrieve a list of forum topics similar to the one currently being read. Since the content is dynamic and user-generated, it can vary from 10 to 100k tokens. Ideally I would generate embeddings from an LLM-generated summary, but that would increase inference costs considerably at the scale I'm applying it.
Having a larger context window out of the box meant a simple swap of embedding models greatly increased the quality of recommendations.
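The lookup itself is just nearest neighbors over the topic vectors. A brute-force sketch of the shape of it (the `embed()` stub is a hypothetical stand-in for whatever model or server you use, and at scale you'd use a vector index like pgvector or HNSW instead):

```python
import numpy as np

def embed(texts):  # hypothetical stand-in for a real embedding call
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384)).astype(np.float32)

topics = ["Topic A body...", "Topic B body...", "Topic C body..."]
embs = embed(topics)
embs /= np.linalg.norm(embs, axis=1, keepdims=True)  # unit vectors

def similar_topics(idx, k=2):
    scores = embs @ embs[idx]  # dot product of unit vectors = cosine similarity
    order = np.argsort(-scores)
    return [(topics[i], float(scores[i])) for i in order if i != idx][:k]

print(similar_topics(0))  # most similar topics to the one being read
```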