https://github.com/redis/redis/blob/unstable/modules/vector-...
Still, to be honest, I'm reading the 500 lines of code with great interest, because I didn't think it was possible to go so small. Maybe part of the trick is that it's in C++ and not in C; for instance, you don't need the queue code.
The strategy my implementation uses to re-link the orphaned nodes upon deletion adds complexity, too. (Btw, any feedback on that part is especially appreciated.)
EDIT: OK, after careful inspection this makes more sense :D
1. Yes, C++ helps: the vector class, the priority queue, ... (see the sketch below)
2. I forgot to say that I implemented serialization in addition to quantization, and that also accounts for quite some code.
3. Support for threads is another price to pay in complexity and code size.
And so forth. OK, now it makes a lot of sense. The 500-LOC implementation is probably a better first-exposure experience for newcomers. After accumulating all the "but how to..." questions, maybe my C implementation is a useful read.
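To make point 1 concrete, here's a minimal C++ sketch (names are illustrative, not anybody's actual code) of the K-best bookkeeping an HNSW search needs. std::priority_queue gives it to you in a dozen lines; in C, the binary heap and its memory management are code you write and maintain yourself:

    #include <cstddef>
    #include <queue>

    struct Candidate {
        float dist;        // distance from the query vector
        std::size_t id;    // node identifier
        bool operator<(const Candidate &o) const { return dist < o.dist; }
    };

    // Max-heap keyed on distance: the worst of the current best K sits on
    // top, so replacing it with a closer candidate is O(log K).
    void maybe_insert(std::priority_queue<Candidate> &best, std::size_t K,
                      Candidate c) {
        if (best.size() < K) best.push(c);
        else if (c.dist < best.top().dist) { best.pop(); best.push(c); }
    }

Multiply that by the candidate queue, the visited set, and the dynamic neighbor lists, and the LOC gap stops being surprising.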
I put together a tiny little implementation a while ago. The key thing is that it writes the index as a few parquet files, so you can host the index on a CDN and read from it via HTTP range requests (e.g. via duckdb wasm).
It definitely isn't beating any benchmarks, but it's free (or wildly cheap) to host, since you serve it directly from a CDN and processing happens locally.
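For the curious, the range-request trick looks roughly like this from a C++ client's point of view. The URL is hypothetical; duckdb wasm does the equivalent through the browser's fetch, and a real parquet reader starts from the footer at the end of the file rather than from byte 0:

    // Sketch: fetch just one byte range of a parquet file hosted on a CDN,
    // instead of downloading the whole thing. Build with -lcurl.
    #include <curl/curl.h>
    #include <cstdio>
    #include <string>

    static size_t collect(char *p, size_t sz, size_t n, void *out) {
        static_cast<std::string *>(out)->append(p, sz * n);
        return sz * n;
    }

    int main() {
        std::string buf;
        CURL *h = curl_easy_init();
        curl_easy_setopt(h, CURLOPT_URL, "https://cdn.example.com/index.parquet");
        curl_easy_setopt(h, CURLOPT_RANGE, "0-65535"); // bytes 0..65535 only
        curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(h, CURLOPT_WRITEDATA, &buf);
        if (curl_easy_perform(h) == CURLE_OK)
            std::printf("fetched %zu bytes\n", buf.size());
        curl_easy_cleanup(h);
        return 0;
    }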
When I first built it, I spent some time trying to tackle the issue that any update to the database means rewriting the entire file (and issuing a cache invalidation). That might be fine for some uses, but it closes a lot of doors, and I hit a wall trying to find a convincing solution given the constraints of the setup.
oersted•1w ago
> HNSW is a graph structure that consists of levels that are more sparsely populated at the top and more densely populated at the bottom. Nodes within a level have connections to other nodes that are near them on the same level. When a node is inserted, a random level is picked and the node is inserted there. It is also inserted into every level beneath that one, down to level 0.
> When a search arrives, it starts at the top and searches that level (following connections) until it finds the closest node in the top level. The search then descends and keeps searching nearby nodes. As the search progresses, the code keeps track of the K nearest nodes it has seen. Eventually it either finds the value, or it finds the closest value on level 0, and the K nearest nodes seen are returned.
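A compact C++ sketch of that descent, with a hypothetical Node type and 1-D points so it fits in a comment. A real implementation keeps a priority queue of the K nearest and widens the search on level 0; this keeps just one candidate per level:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Hypothetical node: a 1-D point plus one neighbor list per level it is on.
    struct Node {
        float x;
        std::vector<std::vector<Node *>> links; // links[level] = neighbors
    };

    static float dist(const Node &n, float q) { return std::fabs(n.x - q); }

    // Greedy descent as in the quoted description: hill-climb to the locally
    // closest node on each level, then reuse it as the entry point one level
    // down.
    static Node *search(Node *entry, float q, int top_level) {
        Node *cur = entry;
        for (int level = top_level; level >= 0; --level) {
            for (bool moved = true; moved; ) {
                moved = false;
                for (Node *nb : cur->links[level])
                    if (dist(*nb, q) < dist(*cur, q)) { cur = nb; moved = true; }
            }
        }
        return cur;
    }

    int main() {
        // Three points; 'a' also lives on level 1, mimicking the
        // sparse-top / dense-bottom shape.
        Node a{0.0f, {{}, {}}}, b{5.0f, {{}}}, c{9.0f, {{}}};
        a.links[0] = {&b};  b.links[0] = {&a, &c};  c.links[0] = {&b};
        std::printf("nearest to 6.2: %.1f\n", search(&a, 6.2f, 1)->x); // 5.0
        return 0;
    }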