I.e., running it like a normal database and getting to take advantage of all of its goodies.
Where you store the `.duckdb` file will make a big difference in performance (e.g. S3 vs. Elastic File System).
But I'd take a good look at DuckLake as a better multiplayer option. If you store `.parquet` files in blob storage, it will be slower than `.duckdb` on EFS, but if you have largish data, EFS gets expensive.
We[0] use DuckLake in our product and we've found a few ways to mitigate the performance hit. For example, we write all data into DuckLake in blob storage, then create analytics tables and store them on faster storage (e.g. GCP Filestore). You can have multiple storage methods in the same DuckLake catalog, so this works nicely.
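(For context, a minimal DuckLake setup looks roughly like the sketch below; the bucket path and table are illustrative placeholders, and the `ducklake` extension is assumed.)

```sql
-- Minimal DuckLake sketch: catalog metadata lives in a DuckDB file,
-- table data is written as Parquet under DATA_PATH (blob storage here).
INSTALL ducklake;
LOAD ducklake;

ATTACH 'ducklake:metadata.ducklake' AS lake (DATA_PATH 's3://my-bucket/lake/');

-- Tables created in the attached catalog store their data files in DATA_PATH.
CREATE TABLE lake.events (ts TIMESTAMP, user_id BIGINT, payload VARCHAR);
INSERT INTO lake.events VALUES (now(), 42, 'hello');
```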
0 - https://www.definite.app/blog/duck-takes-flight
SQLite3 Multiple Ciphers has been around for ages and is free: https://utelle.github.io/SQLite3MultipleCiphers/
And Turso Database supports encryption out of the box: https://docs.turso.tech/tursodb/encryption
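For reference, SQLite3 Multiple Ciphers keeps the SQLCipher-style PRAGMA interface, so keying a database looks roughly like this (the passphrase is a placeholder; exact pragmas vary by build and cipher scheme):

```sql
-- SQLCipher-compatible keying; must run before the first read or write
-- on the connection, otherwise the file is treated as plaintext.
PRAGMA key = 'correct horse battery staple';
SELECT count(*) FROM sqlite_master;  -- errors out if the key is wrong
```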
PunchyHamster•1h ago
DB encryption is useful if you have multiple things that need separate ACLs and encryption keys, but if it's one app, one DB, there's no need for it.
beala•19m ago
> This allows for some interesting new deployment models for DuckDB, for example, we could now put an encrypted DuckDB database file on a Content Delivery Network (CDN). A fleet of DuckDB instances could attach to this file read-only using the decryption key. This elegantly allows efficient distribution of private background data in a similar way like encrypted Parquet files, but of course with many more features like multi-table storage. When using DuckDB with encrypted storage, we can also simplify threat modeling when – for example – using DuckDB on cloud providers. While in the past access to DuckDB storage would have been enough to leak data, we can now relax paranoia regarding storage a little, especially since temporary files and WAL are also encrypted.
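That deployment would look roughly like the sketch below (the URL and key are placeholders; it assumes the `httpfs` extension and the `ENCRYPTION_KEY` attach option from DuckDB 1.4+):

```sql
-- Read-only attach of an encrypted DuckDB file served over HTTPS/CDN.
INSTALL httpfs;
LOAD httpfs;

ATTACH 'https://cdn.example.com/data.duckdb' AS shared (
    READ_ONLY,
    ENCRYPTION_KEY 'my_secret_key'
);

SELECT * FROM shared.some_table LIMIT 10;  -- some_table is hypothetical
```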
notorious_pgb•46m ago
Comparing it to a naive approach (encrypting an entire database file in a single shot and loading it all into memory at once) is always going to make competent work seem "amazing".
I say this not to shit on DuckDB (I see no reason to shit on them); rather, I think it's important that we as professionals have realistic standards that we expect _ourselves_ to hit. Work we view as "amazing" is work we allow ourselves not to be able to replicate. But this is not in that category, and therefore, you should hold yourself to the same standard.