Recently, I've been studying text embedding models and their various applications, and I found the process tedious. I was repeating the same workflow over and over: transforming data into input for the models, generating vectors with multiple models and storing them in a vector DB, and then running similarity searches for different queries. I'd evaluate the results, start tweaking the input and the models, and run the whole process all over again. The iteration cycle was slow, so I built a tool that makes it a lot easier.
Embedding Explorer is a minimal web app to ingest data, generate embeddings with multiple providers, store vectors, and run fast similarity searches so you can compare model quality side‑by‑side. Everything runs locally in your browser—no backend, no login.
It's broken down into the logical steps I was taking, keeping everything organized and consistent as you iterate:
- Data: upload a CSV or point at a SQLite DB.
- Templates: build doc bodies with a small mustache-style syntax (`{{field}}`) and preview IDs/bodies.
- Providers: configure multiple models (OpenAI, Google Gemini, Ollama) and run batch jobs across them.
- Storage/search: vectors + metadata live in libSQL running in WASM, persisted to OPFS; k-NN/cosine queries power the comparison UI.
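To give a feel for the templating step, here is a minimal sketch in TypeScript (the app itself is written in Dart; `renderTemplate`, the `Row` type, and the empty-string fallback for unknown fields are illustrative assumptions, not the app's actual implementation):

```typescript
// A row of source data, e.g. one record from an uploaded CSV.
type Row = Record<string, string>;

// Replace every {{field}} in the template with the matching row value.
// Unknown fields render as empty strings in this sketch.
function renderTemplate(template: string, row: Row): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, field: string) => row[field] ?? "");
}

// Example: build a document body from a CSV row before embedding it.
const row: Row = { title: "Intro to embeddings", tags: "ml,nlp" };
const body = renderTemplate("{{title}} (tags: {{tags}})", row);
```

The same rendered body is what gets sent to each configured provider, which keeps the inputs consistent across models when you compare results.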
Tech stack: Dart + Jaspr for the UI and workers, libSQL WASM for persistence. No telemetry; everything is stored locally in OPFS.
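The search side boils down to ranking stored vectors by cosine similarity against a query vector. A minimal sketch of that ranking, again in TypeScript for illustration (`cosineSimilarity` and `topK` are hypothetical helper names; it assumes vectors come out of the store as plain number arrays):

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every stored document against the query and keep the top k.
function topK(
  query: number[],
  docs: { id: string; vec: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return docs
    .map(d => ({ id: d.id, score: cosineSimilarity(query, d.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Running this per provider's vector set is what makes the side-by-side comparison possible: the same query is scored against each model's embeddings independently.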
Live demo (no login): https://embeddings.thestartupapi.com