I wrote a longer post about the motivation and design: https://dev.to/loadingalias/cargo-rail-making-rust-monorepos...
The Problem:
I've been working on a low-level Rust workspace for a while now. Before I knew it, my 'justfile' was over 1k lines and I had 30 shell scripts for testing. My dep graph was WAY too large, and I couldn't easily split out a single crate, or a few crates, to release as OSS repos... I'd have had to use Google's Copybara (Java tooling, or their GHA) or a mountain of 'git subtree' and filter scripts.
The Solution:
- Dependency Unification: I use Cargo's resolver output (not syntax parsing) to unify versions, compute MSRV, and prune dead dependencies and features. With 'pin_transitives=true' it fully replaces cargo-hakari. The graph is lean across all target triples with a single command.
- Change Detection: The local/CI 'affected' command is graph-aware, so I only check/test/bench what changed, and 'test' is Nextest-native (see the sketch after this list).
- Split/Sync: I keep a canonical monorepo, then extract one or more crates with full git history into new repos, with bi-directional sync and 3-way merge conflict resolution.
- Release/Publish: Dependency-order publishing, changelog generation, tagging... but in 11 dependencies instead of hundreds.
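To make the 'affected' idea concrete, here's a minimal sketch of graph-aware change detection built on the 'cargo_metadata' crate. The 'affected' function and the BFS over reverse edges are my illustration of the concept, not cargo-rail's actual code: invert the resolved dependency graph, then walk outward from the changed crates; everything reached needs re-checking.

    // Minimal sketch of graph-aware change detection (illustrative, not
    // cargo-rail's code). Requires the `cargo_metadata` and `anyhow` crates.
    use std::collections::{HashMap, HashSet, VecDeque};

    use cargo_metadata::{MetadataCommand, PackageId};

    fn affected(changed: &[PackageId]) -> anyhow::Result<HashSet<PackageId>> {
        let metadata = MetadataCommand::new().exec()?;
        let resolve = metadata.resolve.expect("cargo emits resolver output");

        // Invert the resolved graph: for every edge "A depends on B",
        // record "B is depended on by A".
        let mut rdeps: HashMap<&PackageId, Vec<&PackageId>> = HashMap::new();
        for node in &resolve.nodes {
            for dep in &node.deps {
                rdeps.entry(&dep.pkg).or_default().push(&node.id);
            }
        }

        // BFS outward from the changed crates over reverse edges; every
        // crate reached (including the changed ones) must be re-checked.
        let mut seen: HashSet<PackageId> = changed.iter().cloned().collect();
        let mut queue: VecDeque<PackageId> = changed.iter().cloned().collect();
        while let Some(id) = queue.pop_front() {
            for &dependent in rdeps.get(&id).into_iter().flatten() {
                if seen.insert(dependent.clone()) {
                    queue.push_back(dependent.clone());
                }
            }
        }
        Ok(seen)
    }

Everything outside the returned set can be skipped entirely, which is where the CI savings come from.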
Key Decisions:
- 11 core deps / 55 resolved deps = minimal attack surface for supply-chain attacks
- Multi-target resolution = 'cargo metadata --filter-platform' per target, in parallel via rayon, so dead dependencies/features are actually dead (sketched below)
- System git = calls your local 'git' binary directly for deterministic SHAs (JJ compatibility is native)
- Lossless TOML = 'toml_edit' preserves comments and manifest formatting
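The multi-target point, sketched (again with 'cargo_metadata' plus 'rayon'; the target list and the union step here are illustrative assumptions, not cargo-rail's code): resolve once per target triple in parallel, union the resolved package IDs, and anything outside the union is dead on every target.

    // Sketch: resolve the graph per target triple in parallel and union
    // the results; a package absent from every per-target resolution is
    // genuinely dead. Targets below are illustrative. Requires the
    // `cargo_metadata`, `rayon`, and `anyhow` crates.
    use std::collections::HashSet;

    use cargo_metadata::{MetadataCommand, PackageId};
    use rayon::prelude::*;

    fn live_packages(targets: &[&str]) -> anyhow::Result<HashSet<PackageId>> {
        let per_target: Vec<HashSet<PackageId>> = targets
            .par_iter()
            .map(|triple| -> anyhow::Result<HashSet<PackageId>> {
                // `--filter-platform` limits resolver output to deps that
                // are actually reachable on this target triple.
                let metadata = MetadataCommand::new()
                    .other_options(vec![
                        "--filter-platform".to_string(),
                        (*triple).to_string(),
                    ])
                    .exec()?;
                let resolve = metadata.resolve.expect("resolver output");
                Ok(resolve.nodes.into_iter().map(|n| n.id).collect())
            })
            .collect::<anyhow::Result<_>>()?;

        // Union across targets: live on any target means live overall.
        Ok(per_target.into_iter().flatten().collect())
    }

    fn main() -> anyhow::Result<()> {
        let live = live_packages(&[
            "x86_64-unknown-linux-gnu",
            "aarch64-apple-darwin",
        ])?;
        println!("{} packages live on at least one target", live.len());
        Ok(())
    }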
Tested On:
- tikv, meilisearch, helix, helix-db, tokio, ripgrep, polars, ruff, codex, and more. Forks with cargo-rail configured are at github.com/loadingalias.
In my own workspace, change detection alone removed ~1k LoC of scripts, cut CI costs by roughly 80%, and made my builds (especially cold builds) faster and leaner.
Happy to discuss the implementation.