Sandstore is a hyperconverged distributed file system in Go. Every node runs the control plane, data plane, and Raft consensus together: BoltDB metadata, full POSIX semantics, a 2PC chunk lifecycle, gRPC, Kubernetes. The problem I kept hitting was simpler than any of that: I wanted to compare this design against a disaggregated one under identical workloads, and there was no clean way to do it without forking three separate codebases.
So topology became a variable. `topology/contract/` is a public Go package with two interfaces: `ControlPlaneOrchestrator` and `DataPlaneOrchestrator`. Implement them and you have a new storage topology. No fork. Same client, same benchmark suite, same Kubernetes deployment. The contract layer was stable before the first topology existed. It wasn't retrofitted.
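To make the idea concrete, here is a minimal Go sketch of what a contract layer like this looks like. The interface names come from the post, but the method signatures, the `hyperconverged` type, and names like `PlaceChunk` and `WriteChunk` are my assumptions for illustration, not the actual Sandstore API.

```go
package main

import "fmt"

// ControlPlaneOrchestrator decides placement and metadata behavior.
// PlaceChunk is a hypothetical method: given a chunk size, it returns
// the nodes that should hold replicas.
type ControlPlaneOrchestrator interface {
	PlaceChunk(size int64) ([]string, error)
}

// DataPlaneOrchestrator moves the actual bytes.
// WriteChunk is likewise hypothetical.
type DataPlaneOrchestrator interface {
	WriteChunk(id string, data []byte) error
}

// hyperconverged is a toy topology where every node runs both planes,
// so one type implements both interfaces.
type hyperconverged struct{ nodes []string }

func (h hyperconverged) PlaceChunk(size int64) ([]string, error) {
	return h.nodes, nil // trivially replicate to every node
}

func (h hyperconverged) WriteChunk(id string, data []byte) error {
	return nil // no-op in this sketch
}

func main() {
	// A new topology is just another pair of implementations; the
	// client and benchmark suite only ever see the interfaces.
	var cp ControlPlaneOrchestrator = hyperconverged{nodes: []string{"n1", "n2", "n3"}}
	var dp DataPlaneOrchestrator = cp.(hyperconverged)

	replicas, _ := cp.PlaceChunk(8 << 20) // an 8MB chunk
	_ = dp.WriteChunk("chunk-0", nil)
	fmt.Println(len(replicas)) // prints 3
}
```

The point of the shape is that a disaggregated topology would provide a different pair of implementations (e.g. a dedicated metadata service behind `ControlPlaneOrchestrator`), while everything above the contract stays untouched.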
I benchmarked against a 3-node localhost cluster (all nodes on one machine, real Raft replication but no network latency). Flush-forced 8MB writes: p50 1.1s, p99 1.4s. Reads: p50 1.8ms, p99 363ms. The second topology is a GFS-style disaggregated design, with metadata and chunk service split onto separate roles. The comparison I actually want to run is hyperconverged vs disaggregated under identical workloads on the same hardware. That result doesn't exist anywhere in a single codebase right now.
This isn't production storage today. The longer goal is that you have an idea for a storage topology, you implement the interfaces, and Sandstore handles the benchmarking, deployment, and comparison, with no surrounding infrastructure to build from scratch. Website: https://sandstore-eta.vercel.app
If you've run both hyperconverged and disaggregated seriously, where did the tradeoffs actually show up?