That’s why we are launching Marple DB today (https://www.marpledata.com/marple-db).
Marple DB transforms measurement files (CSV, MAT, HDF5, TDMS, …) into a queryable lakehouse. We designed it for extreme ingestion performance. A typical customer example: a single MDF file can contain ~60k channels, sampled at rates up to 1 kHz, for 1 hour. That adds up to roughly 100B datapoints in a single file.
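To give a feel for that scale, here is a quick back-of-the-envelope estimate in Python. The channel/rate mix below is an illustrative assumption (not real customer data); it just shows how 60k mixed-rate channels over an hour land in the ~100B range.

```python
# Rough datapoint estimate for a 1-hour file with ~60k channels.
# The split of channels across sample rates is an assumed mix.
DURATION_S = 3600  # 1 hour of logging

channel_groups = [
    # (number of channels, sample rate in Hz) -- illustrative
    (30_000, 250),    # slow sensors
    (20_000, 500),    # mid-rate sensors
    (10_000, 1_000),  # fast sensors at 1 kHz
]

total = sum(n * rate * DURATION_S for n, rate in channel_groups)
print(f"~{total / 1e9:.0f}B datapoints")  # -> ~99B datapoints
```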
To make this work, we use a combination of Parquet files on Apache Iceberg + PostgreSQL. Parquet gives us the scalability we need, and Postgres acts as an extremely fast visualisation cache. We provide Python and MATLAB SDKs to talk to both storage layers in a unified way. Marple DB is available as a paid product only, but we do offer self-managed hosting.
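For the curious, here is a minimal sketch of that two-tier write path: full-resolution data goes to Parquet (the lakehouse side), and a downsampled copy goes to Postgres for fast plotting. This is not Marple DB's actual implementation or SDK; the table names, schema, and naive stride-based downsampling are assumptions purely for illustration.

```python
# Illustrative two-tier ingestion: Parquet for full resolution,
# Postgres for a downsampled visualisation cache.
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
import psycopg2

def ingest_channel(name: str, timestamps: np.ndarray, values: np.ndarray,
                   pg_dsn: str, stride: int = 1000) -> None:
    # 1) Full resolution -> Parquet (in practice registered in an
    #    Apache Iceberg table so it stays queryable at lakehouse scale).
    table = pa.table({"timestamp": timestamps, "value": values})
    pq.write_table(table, f"{name}.parquet")

    # 2) Downsampled copy -> Postgres, acting as the visualisation cache.
    conn = psycopg2.connect(pg_dsn)
    with conn, conn.cursor() as cur:
        cur.execute(
            """CREATE TABLE IF NOT EXISTS viz_cache (
                   channel TEXT, ts DOUBLE PRECISION, value DOUBLE PRECISION)"""
        )
        rows = [(name, float(t), float(v))
                for t, v in zip(timestamps[::stride], values[::stride])]
        cur.executemany(
            "INSERT INTO viz_cache (channel, ts, value) VALUES (%s, %s, %s)",
            rows,
        )
    conn.close()
```

A real system would use a smarter downsampling strategy than a fixed stride, but the split between a scalable columnar store and a fast cache is the pattern described above.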
My co-founder MBaert and I will join the discussion below.
We are happy to hear your questions!