examples:
- 200 small files, 2-3 rclone updates
- 10,000 small files, ~1000 rclone updates
- 1 mil small files, ~2000 rclone updates
- 100 mil small files, ~4000 rclone updates
- 500 large + 10k small files, ~2000 rclone updates
- 1000 large files, ~1000 rclone updates
details: i implemented file chunking algorithms similar to those used by popular distributed file systems (dfs), but without needing real-time redis/sql metadata updates during file writes.
instead, your updated file chunks and metadata are computed and synced in the background while reads and writes to your file system are idle.
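to make the idea concrete, here's a minimal python sketch of that deferred-sync approach. the names (LazyChunkIndex, CHUNK_SIZE, IDLE_THRESHOLD_SECS) and the dict-backed metadata_store are my own illustrations, not cloudy.so's actual implementation:

```python
import hashlib
import threading
import time

# hypothetical values for illustration; not the actual cloudy.so settings
CHUNK_SIZE = 4 * 1024 * 1024      # 4 MiB chunks
IDLE_THRESHOLD_SECS = 30          # how long i/o must be quiet before syncing

class LazyChunkIndex:
    """sketch of chunking with deferred metadata sync.

    writes only touch an in-memory dirty set; chunk hashes and the
    metadata store (redis/sql in the description above) are updated by a
    background worker once the filesystem has been idle for a while.
    """

    def __init__(self, metadata_store):
        self.metadata_store = metadata_store   # assumed dict-like store
        self.dirty_files = set()               # files written since last sync
        self.last_io = time.monotonic()
        self._lock = threading.Lock()

    def record_write(self, path):
        # called on every write: cheap, no metadata round-trip
        with self._lock:
            self.dirty_files.add(path)
            self.last_io = time.monotonic()

    def record_read(self):
        with self._lock:
            self.last_io = time.monotonic()

    def _chunk_hashes(self, path):
        # split the file into fixed-size chunks and hash each one
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    def sync_loop(self):
        # background worker: only sync when i/o has been quiet
        while True:
            time.sleep(5)
            with self._lock:
                idle = time.monotonic() - self.last_io > IDLE_THRESHOLD_SECS
                pending = list(self.dirty_files) if idle else []
                if idle:
                    self.dirty_files.clear()
            for path in pending:
                # one metadata update per dirty file, batched after the fact,
                # instead of one update per write in real time
                self.metadata_store[path] = self._chunk_hashes(path)
```

the point of the sketch is the batching: many writes collapse into one metadata update per dirty file, which is why the rclone update counts in the examples above grow much slower than the file counts.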
it's live on http://cloudy.so starting today. this update improves instance spin-up and spin-down times and significantly reduces storage costs :')