Point any SQL at any file. Parquet, CSV, JSON, Avro, Arrow, SQLite, Excel. No server, no import step, no extension to install before you can read a Parquet file. Same embedded model as SQLite and DuckDB, different defaults.
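Roughly, usage looks like this (an illustrative sketch: the file name is a placeholder and the exact syntax may differ from the docs):

```sql
-- Query a Parquet file in place; no server, no import step
SELECT country, count(*) AS n
FROM 'events.parquet'
WHERE ts >= DATE '2024-01-01'
GROUP BY country
ORDER BY n DESC;
```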
A few things we cared about while building it: It is one binary. Drop slothdb.exe somewhere, run it. It also runs in the browser: the WASM build is 1.3 MB, and the edge variant fits under Workers' 1 MB script cap.
It is fast enough to be worth the swap for analytical work. On a 5-query warm batch over 10M rows, SlothDB finishes in 138 ms; DuckDB 1.1.5 finishes the same batch on the same hardware in 540 ms.
It is also early: v0.2.0. The Python wheel had a packaging bug that was only caught because a stranger filed an issue. So if you hit a rough edge, file one. We read every one of them.
tee-es-gee•1h ago
> Everything in core, no extensions. HTTP(S), S3 (anonymous public reads), Avro, Excel, Arrow, and SQLite read through the same core binary - no separate install/load step.
That is not so good for an embedded database, though; bundling network access into core opens security concerns.