The problem: every update (even changing a single texture) transferred the entire build over S3. A 2.2 GB game with minor changes meant 2.2 GB up and 2.2 GB down, every time, for every user. I looked for an existing solution and found nothing that fit:
- rsync needs SSH on both ends and doesn't compose with arbitrary cloud storage
- bsdiff/xdelta operate on single files, not directories
So I built rac-delta: an open, storage-agnostic differential sync protocol with SDKs in Rust and Node.
- How it works -
The protocol splits files into fixed-size chunks (1 MB by default), hashes each with Blake3, and produces a manifest file, rd-index.json, that describes the full directory: every file, every chunk, every hash.
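The chunking step can be sketched like this. This is an illustrative sketch, not the SDKs' code: the FileEntry/ChunkEntry field names are assumptions rather than the actual rd-index.json schema, and SHA-256 stands in for Blake3 because Node's standard library has no Blake3 implementation.

```typescript
import { createHash } from "node:crypto";

const CHUNK_SIZE = 1024 * 1024; // 1 MB default chunk size

// Hypothetical shapes; the real rd-index.json schema may differ.
interface ChunkEntry { hash: string; offset: number; size: number; }
interface FileEntry { path: string; size: number; chunks: ChunkEntry[]; }

function indexFile(path: string, data: Buffer): FileEntry {
  const chunks: ChunkEntry[] = [];
  for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
    const slice = data.subarray(offset, offset + CHUNK_SIZE);
    chunks.push({
      // SHA-256 as a stand-in; the protocol uses Blake3
      hash: createHash("sha256").update(slice).digest("hex"),
      offset,
      size: slice.length,
    });
  }
  return { path, size: data.length, chunks };
}
```

The last chunk of a file is simply shorter than 1 MB; every other chunk boundary is fixed, which is what makes two indexes directly comparable chunk by chunk.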
To sync:
1. Generate a local rd-index.json by scanning the directory
2. Fetch the remote rd-index.json from your storage backend if there is one
3. Compare them to produce a DeltaPlan:
DeltaPlan {
  newAndModifiedFiles: FileEntry[]
  deletedFiles: string[]
  missingChunks: ChunkEntry[]
  obsoleteChunks: ChunkEntry[]
}
4. Transfer only the missingChunks
5. Clean up obsolete chunks
6. Push the updated rd-index.json
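Step 3 can be sketched as a pure diff over the two indexes. Again a hypothetical implementation under stated assumptions: the DeltaPlan field names follow the post, but the Index/FileEntry/ChunkEntry shapes and the `diff` function are illustrative, not the SDKs' actual API.

```typescript
interface ChunkEntry { hash: string; size: number; }
interface FileEntry { path: string; chunks: ChunkEntry[]; }
interface Index { files: FileEntry[]; }

interface DeltaPlan {
  newAndModifiedFiles: FileEntry[];
  deletedFiles: string[];
  missingChunks: ChunkEntry[];   // present locally, absent remotely -> upload
  obsoleteChunks: ChunkEntry[];  // present remotely, no longer referenced -> delete
}

function diff(local: Index, remote: Index): DeltaPlan {
  const remoteFiles = new Map(remote.files.map(f => [f.path, f]));
  const localPaths = new Set(local.files.map(f => f.path));

  // Collapse each side's chunks into a map keyed by hash.
  const chunkMap = (files: FileEntry[]) => {
    const m = new Map<string, ChunkEntry>();
    for (const f of files) for (const c of f.chunks) m.set(c.hash, c);
    return m;
  };
  const localChunks = chunkMap(local.files);
  const remoteChunks = chunkMap(remote.files);

  return {
    newAndModifiedFiles: local.files.filter(f => {
      const r = remoteFiles.get(f.path);
      return !r || f.chunks.length !== r.chunks.length ||
        f.chunks.some((c, i) => c.hash !== r.chunks[i].hash);
    }),
    deletedFiles: remote.files.map(f => f.path).filter(p => !localPaths.has(p)),
    missingChunks: [...localChunks.values()].filter(c => !remoteChunks.has(c.hash)),
    obsoleteChunks: [...remoteChunks.values()].filter(c => !localChunks.has(c.hash)),
  };
}
```

Because the chunk maps are keyed by hash, a chunk that appears in several files is counted once on each side of the comparison.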
Chunks are deduplicated across files: if two files share identical regions, that chunk is stored and transferred once.
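A content-addressed store keyed by chunk hash is all the dedup mechanism needed. A tiny illustration (hypothetical API, with SHA-256 standing in for Blake3 since Node's stdlib lacks it):

```typescript
import { createHash } from "node:crypto";

const CHUNK = 1024 * 1024;
const store = new Map<string, Buffer>(); // hash -> chunk bytes, stored once

// Returns how many chunks of `data` were actually new to the store.
function put(data: Buffer): number {
  let newChunks = 0;
  for (let o = 0; o < data.length; o += CHUNK) {
    const slice = data.subarray(o, o + CHUNK);
    const h = createHash("sha256").update(slice).digest("hex"); // Blake3 in the real protocol
    if (!store.has(h)) { store.set(h, slice); newChunks++; }
  }
  return newChunks;
}
```

A file whose regions already exist in the store under any other file contributes zero new chunks, which is exactly the transfer savings described above.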
Blake3 is notably faster than SHA-256 for large directory scans, which matters when you're hashing multi-gigabyte directories on every sync.
- Storage-agnostic by design -
rac-delta has no opinion about where chunks live. The SDKs ship adapters for S3, Azure Blob, GCS, SSH, HTTP, signed URLs (experimental), and the local filesystem.
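The adapter surface that makes this work is small. A sketch of what such an interface could look like, with an in-memory implementation for testing; the method names here are assumptions, not the SDKs' actual adapter API:

```typescript
// Hypothetical adapter contract: anything that can get, put, delete,
// and probe objects by key can back a sync.
interface StorageAdapter {
  getObject(key: string): Promise<Buffer>;
  putObject(key: string, data: Buffer): Promise<void>;
  deleteObject(key: string): Promise<void>;
  exists(key: string): Promise<boolean>;
}

// In-memory adapter, handy for tests; an S3 or SSH adapter would
// implement the same four calls against its backend.
class MemoryAdapter implements StorageAdapter {
  private objects = new Map<string, Buffer>();
  async getObject(key: string): Promise<Buffer> {
    const v = this.objects.get(key);
    if (!v) throw new Error(`not found: ${key}`);
    return v;
  }
  async putObject(key: string, data: Buffer) { this.objects.set(key, data); }
  async deleteObject(key: string) { this.objects.delete(key); }
  async exists(key: string) { return this.objects.has(key); }
}
```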
- Benchmark results (real S3 infrastructure, 2.2 GB directory) -
Download transfer: rac-delta 116 MB vs. raw S3 2219 MB (94.7% reduction)
Upload transfer: rac-delta 115 MB vs. raw S3 2210 MB (94.8% reduction)
Download time: rac-delta 35.5 s vs. raw S3 671.2 s
Upload time: rac-delta 53.3 s vs. raw S3 268.9 s
Egress cost per 1000 users (approx.): rac-delta €9.66 vs. raw S3 €184.27 (19x cheaper)
And the base upload (first-time, no remote index) takes 172.6 s with rac-delta vs. 209.4 s raw: slightly faster even on full uploads, thanks to concurrent chunk streaming.
(Initial tests ran on a single machine with a slow connection against eu-central-1; production tests ran at Raccreative Games.)
- Three download strategies -
Different environments need different tradeoffs:
- Memory-first: all chunks into RAM, then reconstruct. Best for small builds, fast networks
- Disk-first: chunks to a temp directory, then reconstruct. Better for low-memory devices
- Streaming (recommended): reconstruct files as chunks arrive. No extra RAM or disk overhead
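The streaming strategy works because every chunk in the manifest already carries its target file and offset, so each chunk can be written straight into place as it arrives. A minimal sketch under those assumptions (the IncomingChunk shape and writeChunks function are illustrative, not the SDK API):

```typescript
import { open, type FileHandle } from "node:fs/promises";

// Assumed shape: each downloaded chunk knows where it belongs.
interface IncomingChunk { file: string; offset: number; data: Buffer; }

async function writeChunks(chunks: AsyncIterable<IncomingChunk>) {
  const handles = new Map<string, FileHandle>();
  for await (const c of chunks) {
    let fh = handles.get(c.file);
    if (!fh) {
      fh = await open(c.file, "w"); // create/truncate once per file
      handles.set(c.file, fh);
    }
    // Positional write: chunks may arrive in any order.
    await fh.write(c.data, 0, c.data.length, c.offset);
  }
  for (const fh of handles.values()) await fh.close();
}
```

No chunk is ever buffered beyond the one in flight, which is why this strategy needs neither extra RAM (memory-first) nor a temp directory (disk-first).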
- Production usage -
Raccreative Games uses rac-delta in production today. The CLI tool Clawdrop (Rust) handles uploads; the desktop launcher (Electron/Node) handles downloads. Both use the respective SDKs directly.
https://github.com/raccreative/clawdrop
- Open protocol, MIT licensed -
rac-delta is a documented open protocol - anyone can implement it in any language. The Rust and Node SDKs are the reference implementations.
Docs: https://raccreative.github.io/rac-delta-docs/
Benchmarks + ROI calculator: https://racdelta.com/en/
Node SDK: https://github.com/raccreative/rac-delta-js
Rust SDK: https://github.com/raccreative/rac-delta-rs
Looking for feedback from teams distributing large binaries - desktop app installers, ML model weights, firmware OTA updates, simulation assets, or anything where "upload the whole thing again" is your current answer. Happy to run benchmarks against your specific file patterns.