Just migrated all embeddings at my company to this same model a few weeks ago, and it's been a game changer. The 32k context window is a 64x increase over our previously used model. Being natively multilingual and producing standard 1024-dimensional vectors also made it a seamless transition, even with millions of embeddings across thousands of databases.
xfalcox•2h ago
I do recommend using https://github.com/huggingface/text-embeddings-inference for fast inference.
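For anyone who hasn't tried it: TEI serves embeddings over a plain HTTP API, so once the container is up you just POST text to it. A minimal Python sketch of what a client call looks like, assuming you've already started a TEI instance with your embedding model and mapped it to localhost:8080 (the /embed route and "inputs" payload match the project's docs, but adjust URL and ports to your own deployment):

    # Minimal sketch of querying a locally running text-embeddings-inference
    # (TEI) server. Assumes TEI is already serving an embedding model on
    # localhost:8080; the model id and port are placeholders for your setup.
    import requests

    TEI_URL = "http://localhost:8080/embed"  # TEI's embedding route

    def embed(texts):
        # TEI accepts a string or a list of strings under "inputs" and
        # returns one embedding (a list of floats) per input text.
        resp = requests.post(TEI_URL, json={"inputs": texts}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        vectors = embed(["Hello world", "Bonjour le monde"])
        # With a 1024-dimensional embedding model you'd expect 1024 here.
        print(len(vectors), "embeddings of dimension", len(vectors[0]))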
ipsum2•42m ago