In December of 2022, I was scrolling Twitter in the wee hours of the morning holding my then-newborn daughter. ChatGPT had launched, and we were all figuring out what this technology was and how to make it useful. Developers were using retrieval to bring their data to the models - and so I DM’d every person who had tweeted about “embeddings” in the entire month of December. (It was only 120 people!) I saw then that AI was going to need search across all the world’s information to build useful and reliable applications.
Anton Troynikov and I started Chroma with the beliefs that:
1. AI-based systems were way too difficult to productionize
2. Latent space was incredibly important to improving AI-based systems (no one understood this at the time)
On Valentine’s Day 2023, we launched the first version of Chroma and it immediately took off. Chroma made retrieval just work. Chroma is now a large open-source project with 21k+ stars and 5M monthly downloads, used at companies like Apple, Amazon, Salesforce, and Microsoft.
Today we’re excited to launch Chroma Cloud - our fully managed offering backed by an Apache 2.0 serverless database called Chroma Distributed. Chroma Distributed is written in Rust and uses object storage for extreme scalability and reliability. Chroma Cloud is fast and cheap. Leading AI companies such as Factory, Weights & Biases, Propel, and Foam already use Chroma Cloud in production to power their agents. It brings the “it just works” developer experience Chroma is known for to the cloud.
Try it out and let me know what you think!
— Jeff
codekisser•2h ago
jeffchuber•2h ago
dedicated solutions have more advanced search features that enable more accurate results. search indexing is resource-intensive and can contend for resources with postgres/redis. the cost and speed benefits are naturally more pronounced as data volume scales.
for example - chroma has built-in regex+trigram search and copy-on-write forking of indexes. this feature combo is killer for the code-search use case.
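For illustration, here is a minimal sketch with the Python client of how that combo might be used for code search. The `$regex` operator and the `fork()` call below are assumptions inferred from the features named above, not confirmed signatures.

```python
import chromadb

# Local client for the sketch; Chroma Cloud uses a hosted client instead.
client = chromadb.Client()

code = client.get_or_create_collection(name="repo-main")
code.add(
    ids=["utils.py:1", "handlers.py:1"],
    documents=[
        "def parse_config(path): ...",
        "async def handle_request(req): ...",
    ],
)

# Regex search over document text. The `$regex` operator name is an
# assumption based on the regex feature mentioned above.
hits = code.get(where_document={"$regex": r"async def \w+\(req"})

# Copy-on-write fork of the index, e.g. to search a branch without
# re-embedding everything. Illustrative only - signature not verified.
# branch = code.fork(name="repo-feature-branch")
```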
philip1209•2h ago
Chroma supports multiple search methods - including vector, full-text, and regex search.
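For example, a quick sketch with the Python client that combines a vector query with metadata and full-text filters (the collection and field names are made up):

```python
import chromadb

client = chromadb.Client()
docs = client.get_or_create_collection(name="support-articles")

docs.add(
    ids=["a1", "a2"],
    documents=[
        "How to rotate your API keys safely.",
        "Troubleshooting webhook delivery failures.",
    ],
    metadatas=[{"product": "api"}, {"product": "webhooks"}],
)

# Vector similarity, restricted by a metadata filter and a
# full-text containment filter in a single query.
results = docs.query(
    query_texts=["my webhook events stopped arriving"],
    n_results=3,
    where={"product": "webhooks"},
    where_document={"$contains": "webhook"},
)
```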
Four quick ways Chroma is different from pgvector: better indexes, sharding, scaling, and object storage.
Chroma uses SPANN (Scalable Approximate Nearest Neighbor) and SPFresh (a freshness-aware ANN index). These are specialized algorithms not present in pgvector [1].
The core issue with scaling vector database indexes is that they don't handle `WHERE` clauses as efficiently as SQL does. In SQL you can ask "select * from posts where organization_id=7" and the b-tree gives good performance. But with vector databases, as the index size grows, not only does it get slower - it gets less accurate. Combining filtering with large indexes results in poor performance and accuracy.
The solution is to have many small indexes, which Chroma calls "Collections". So, instead of having all user data in one table - you shard across collections, which improves performance and accuracy.
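As a rough illustration of that pattern (the tenant naming scheme is invented for the example): rather than filtering one giant index by `organization_id`, each organization gets its own small collection.

```python
import chromadb

client = chromadb.Client()

def collection_for_org(org_id: int):
    # One small collection per tenant keeps each ANN index compact,
    # instead of filtering one huge index with a WHERE-style clause.
    return client.get_or_create_collection(name=f"posts-org-{org_id}")

# Writes and reads are always scoped to the tenant's own collection.
posts = collection_for_org(7)
posts.add(ids=["p1"], documents=["Q3 planning notes"])
matches = posts.query(query_texts=["roadmap for next quarter"], n_results=5)
```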
The third issue with using SQL for vectors is that the vectors quickly become a scaling constraint for the database. Writes become slow due to consistency requirements, disk becomes dominated by vector indexes, and CPU gets clogged by constantly re-computing indexes. I've been there, and ultimately it hurts overall application performance for end users. The solution for Chroma Cloud is a distributed system - which allows strong consistency, high write throughput, and low-latency reads.
Finally, Chroma is built on object storage - vectors are stored on AWS S3. This allows cold + warm storage tiers, so that you can have minimal storage costs for cold data. This "scale to zero" property is especially important for multi-tenant applications that need to retain data for inactive users.
[1] https://www.youtube.com/watch?v=1QdwYWd3S1g