I’m the developer of DuoBolt, a local-first duplicate file finder for macOS and Windows (plus a CLI for macOS/Windows/Linux).
Most duplicate cleaners focus on scanning speed, but in my experience the real failure mode is what happens after the scan: unclear selection, risky bulk deletes, and no way to audit or undo a session.
DuoBolt’s core design is a review-first workflow:
- exact duplicate detection (byte-identical) using BLAKE3 hashing
- explicit review before deletion (no “one-click clean”)
- safe deletion via Trash/Recycle Bin
- persistent deletion history with review + restore (session-based; a rough sketch follows below)
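
To make the last point concrete, here's a minimal sketch of what a session-based deletion record could look like. Everything here (the `DeletionRecord`/`DeletionSession` names, the fields, keeping the trash path for restore) is my own illustration of the idea, not DuoBolt's actual schema:

```rust
// Hypothetical data model for a reviewable, restorable deletion history.
// Names and fields are illustrative, not DuoBolt's real format.
use std::path::PathBuf;
use std::time::SystemTime;

struct DeletionRecord {
    original_path: PathBuf, // where the file lived before deletion
    trash_path: PathBuf,    // where Trash/Recycle Bin put it, needed for restore
    size: u64,              // shown during review/audit
    blake3_hex: String,     // content hash: records which duplicate was removed
}

struct DeletionSession {
    started_at: SystemTime,       // one session per review-and-delete pass
    records: Vec<DeletionRecord>, // everything removed in that pass
}

impl DeletionSession {
    /// Restore is just the reverse mapping: move each file from its
    /// trash location back to its original path.
    fn restore_plan(&self) -> Vec<(&PathBuf, &PathBuf)> {
        self.records
            .iter()
            .map(|r| (&r.trash_path, &r.original_path))
            .collect()
    }
}
```

Because deletes go through the system Trash rather than a hard unlink, restore is an ordinary move, and the session record is enough to audit what happened after the fact.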
Under the hood, the scan engine is written in Rust and shared between the desktop app and the CLI. It uses a two-stage approach (a cheap head+tail prehash to quickly eliminate files that can't be identical, then a full BLAKE3 hash of whatever survives) and parallelizes hashing across CPU cores.
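
To make the pipeline concrete, here's an illustrative sketch assuming the `blake3` and `rayon` crates. The probe size, function names, and the assumption that candidates have already been grouped by file size are mine; it's a reading of "head+tail prehash, then full BLAKE3", not the actual engine code:

```rust
// Two-stage duplicate detection sketch: cheap head+tail prehash first,
// full-content BLAKE3 only for files whose prehash collides.
use rayon::prelude::*;
use std::collections::HashMap;
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom};
use std::path::{Path, PathBuf};

const PROBE: usize = 4096; // hypothetical head/tail probe size

/// Stage 1: hash only the first and last PROBE bytes. Most same-size files
/// that aren't actually identical diverge here, so this usually rules them
/// out without reading the whole file.
fn prehash(path: &Path) -> io::Result<blake3::Hash> {
    let mut f = File::open(path)?;
    let len = f.metadata()?.len() as usize;
    let mut hasher = blake3::Hasher::new();
    let mut buf = vec![0u8; PROBE.min(len)];
    f.read_exact(&mut buf)?;
    hasher.update(&buf);
    if len > PROBE {
        f.seek(SeekFrom::End(-(PROBE as i64)))?;
        f.read_exact(&mut buf)?;
        hasher.update(&buf);
    }
    Ok(hasher.finalize())
}

/// Stage 2: full-content BLAKE3, run only on stage-1 survivors.
fn full_hash(path: &Path) -> io::Result<blake3::Hash> {
    let mut f = File::open(path)?;
    let mut hasher = blake3::Hasher::new();
    io::copy(&mut f, &mut hasher)?; // blake3::Hasher implements io::Write
    Ok(hasher.finalize())
}

/// Group candidate paths (assumed pre-bucketed by size) into identical sets.
fn duplicate_groups(candidates: Vec<PathBuf>) -> Vec<Vec<PathBuf>> {
    // Parallel prehash across CPU cores.
    let pre: Vec<(blake3::Hash, PathBuf)> = candidates
        .par_iter()
        .filter_map(|p| prehash(p).ok().map(|h| (h, p.clone())))
        .collect();

    // Keep only prehash buckets with more than one member...
    let mut buckets: HashMap<[u8; 32], Vec<PathBuf>> = HashMap::new();
    for (h, p) in pre {
        buckets.entry(*h.as_bytes()).or_default().push(p);
    }
    let survivors: Vec<PathBuf> = buckets
        .into_values()
        .filter(|b| b.len() > 1)
        .flatten()
        .collect();

    // ...then confirm with a parallel full hash of the survivors.
    let full: Vec<(blake3::Hash, PathBuf)> = survivors
        .par_iter()
        .filter_map(|p| full_hash(p).ok().map(|h| (h, p.clone())))
        .collect();
    let mut groups: HashMap<[u8; 32], Vec<PathBuf>> = HashMap::new();
    for (h, p) in full {
        groups.entry(*h.as_bytes()).or_default().push(p);
    }
    groups.into_values().filter(|g| g.len() > 1).collect()
}
```

The point of stage 1 is that the expensive full read only happens for files that are already very likely duplicates; everything else is dismissed after two small reads.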
On performance, it's optimized for real workloads: large libraries, codebases, external drives, and NAS over SMB.
I also added an on-disk hash cache to avoid re-hashing unchanged files between scans. The cache is configurable, with separate thresholds for local and network volumes, so you can tune how aggressively it reuses results for local-disk vs. NAS workflows.
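
For what "configurable thresholds" might mean mechanically, here's a hedged sketch. Interpreting the thresholds as per-volume-class limits on how long a cached hash is trusted is my assumption; the size+mtime check is the standard invalidation trick for this kind of cache:

```rust
// Plausible shape for an on-disk hash cache: reuse a stored hash when the
// file's size and mtime are unchanged and the entry is within the reuse
// threshold for its volume class. Not DuoBolt's actual format.
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::{Duration, SystemTime};

#[derive(Clone, Copy)]
enum VolumeClass {
    Local,
    Network, // e.g. SMB shares, where mtime granularity can be coarser
}

struct CacheEntry {
    size: u64,
    mtime: SystemTime,
    blake3_hex: String,
    cached_at: SystemTime,
}

struct HashCache {
    entries: HashMap<PathBuf, CacheEntry>,
    max_age_local: Duration,   // hypothetical knob: how long to trust local entries
    max_age_network: Duration, // hypothetical knob: typically tighter for NAS
}

impl HashCache {
    /// Return the cached hash if the file looks unchanged and the entry is
    /// still within its volume class's threshold; otherwise the caller re-hashes.
    fn lookup(
        &self,
        path: &PathBuf,
        size: u64,
        mtime: SystemTime,
        class: VolumeClass,
    ) -> Option<&str> {
        let entry = self.entries.get(path)?;
        if entry.size != size || entry.mtime != mtime {
            return None; // metadata changed: must re-hash
        }
        let max_age = match class {
            VolumeClass::Local => self.max_age_local,
            VolumeClass::Network => self.max_age_network,
        };
        let age = entry.cached_at.elapsed().ok()?;
        (age <= max_age).then(|| entry.blake3_hex.as_str())
    }
}
```

Splitting the knob by volume class makes sense because network filesystems can coarsen or misreport mtimes in ways local filesystems generally don't, so the network side usually wants to be more conservative.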
Pricing is a one-time $29.99 license for the desktop app (lifetime access, 2 seats). The CLI is free.
Website: https://duobolt.app
Benchmarks: https://duobolt.app/benchmarks
Docs: https://duobolt.app/docs
Happy to answer technical questions, especially around hashing, caching, NAS I/O, or the deletion history design.