I’ve been working on a small project called MemCloud — a distributed, in-memory data store written in Rust. It lets multiple machines on a LAN pool their RAM and act like a shared, ephemeral storage cloud.
Why I built it
I often have multiple devices around me (Mac + Linux laptop + home server) that sit mostly idle. I wanted these machines to behave like one big RAM cache for local development, ML experiments, and data processing — without installing heavy systems or configuring clusters.
So I built a lightweight daemon that:
auto-discovers peers via mDNS
exposes a simple local RPC API
pools memory across devices
supports raw block storage and a Redis-style key-value interface
works offline on macOS and Linux
has a CLI + Rust SDK + JS/TypeScript SDK
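Since peers come and go on a LAN, the daemon has to keep its peer list fresh as mDNS announcements arrive and stop. A minimal sketch of what that bookkeeping might look like (all names here are hypothetical, not MemCloud's actual internals):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

/// Tracks peers discovered via mDNS and drops ones that stop announcing.
/// Hypothetical sketch; memnode's real peer manager may differ.
struct PeerManager {
    peers: HashMap<String, (SocketAddr, Instant)>, // name -> (addr, last seen)
    ttl: Duration,
}

impl PeerManager {
    fn new(ttl: Duration) -> Self {
        Self { peers: HashMap::new(), ttl }
    }

    /// Called whenever an mDNS announcement for `name` arrives.
    fn seen(&mut self, name: &str, addr: SocketAddr) {
        self.peers.insert(name.to_string(), (addr, Instant::now()));
    }

    /// Drop peers that haven't announced within the TTL.
    fn prune(&mut self) {
        let ttl = self.ttl;
        self.peers.retain(|_, (_, last)| last.elapsed() < ttl);
    }

    fn live_peers(&self) -> Vec<&str> {
        self.peers.keys().map(|k| k.as_str()).collect()
    }
}
```

The TTL-based prune means a machine that sleeps or leaves the network silently ages out rather than causing routing errors forever.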
What it does
Store a block on any peer and load it from another peer in under 10 ms over a LAN
Offload large streams (logs, datasets) without spiking local RAM
Build small distributed workflows without running Redis/Memcached clusters
Experiment with P2P memory systems in a simple way
Repo
https://github.com/vibhanshu2001/memcloud
Architecture (short version)
Each device runs a small Rust daemon ("memnode"):
mDNS → discovers peers
Peer Manager → handles connections
Block Manager → stores/loads blocks in local RAM
RPC API → CLI/SDK communication
Optional KV store → set(key, value) / get(key)
SDKs talk only to the local daemon, which routes requests to the correct peer.
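The post doesn't say how the daemon decides which peer owns a given block or key. One common choice for a system like this is rendezvous (highest-random-weight) hashing, which is deterministic and only moves keys whose owner actually disappears. A sketch under that assumption, using the standard library's hasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Score a (key, peer) pair; the peer with the highest score owns the key.
fn score(key: &str, peer: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    peer.hash(&mut h);
    h.finish()
}

/// Rendezvous hashing: every daemon computes the same owner for a key,
/// and removing a peer only reassigns the keys that peer owned.
fn owner<'a>(key: &str, peers: &[&'a str]) -> Option<&'a str> {
    peers.iter().copied().max_by_key(|p| score(key, p))
}
```

The local daemon would run `owner()` over its live peer list and forward the RPC there; no coordinator or shared hash ring is needed, which fits the zero-config goal.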
Benchmarks (on an M1 Mac)
SET: ~25k ops/sec (1KB payloads)
GET: ~16k ops/sec (not yet optimized; curious what numbers others get on their machines)
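For anyone who wants to compare numbers, here is the general shape of a harness that produces ops/sec figures. It times inserts into a plain in-process HashMap as a stand-in, so it gives an upper bound rather than MemCloud's actual RPC path; the gap between the two is roughly the serialization plus networking overhead:

```rust
use std::collections::HashMap;
use std::time::Instant;

/// Time `n` SET-style inserts of 1 KB payloads and return ops/sec.
/// In-process only: no RPC, no network, so this is a ceiling.
fn bench_set(n: usize) -> f64 {
    let payload = vec![0u8; 1024]; // 1 KB value, matching the post's payload size
    let mut store: HashMap<String, Vec<u8>> = HashMap::with_capacity(n);
    let start = Instant::now();
    for i in 0..n {
        store.insert(format!("key-{i}"), payload.clone());
    }
    n as f64 / start.elapsed().as_secs_f64()
}
```

Running the same loop against the daemon's SET call, and subtracting the in-process number, would show how much of the budget the RPC layer is consuming.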
Looking for feedback on:
architecture & safety
networking design
memory model & eviction strategies
real-world use cases
potential pitfalls I might not be aware of
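On the eviction question specifically: since blocks live entirely in RAM, each node needs a policy once it hits its memory budget. A minimal LRU over a capped byte budget might look like this (hypothetical sketch; the post doesn't say what memnode currently does):

```rust
use std::collections::{HashMap, VecDeque};

/// LRU block cache with a byte budget: inserting past the cap evicts the
/// least-recently-used blocks first. Hypothetical, not memnode's actual policy.
struct LruBlocks {
    budget: usize,
    used: usize,
    blocks: HashMap<u64, Vec<u8>>,
    order: VecDeque<u64>, // front = least recently used
}

impl LruBlocks {
    fn new(budget: usize) -> Self {
        Self { budget, used: 0, blocks: HashMap::new(), order: VecDeque::new() }
    }

    fn put(&mut self, id: u64, data: Vec<u8>) {
        // Replace an existing block cleanly before re-inserting.
        if let Some(old) = self.blocks.remove(&id) {
            self.used -= old.len();
            self.order.retain(|&b| b != id);
        }
        self.used += data.len();
        self.blocks.insert(id, data);
        self.order.push_back(id);
        // Evict from the LRU end until we're back under budget.
        while self.used > self.budget {
            let Some(victim) = self.order.pop_front() else { break };
            if let Some(evicted) = self.blocks.remove(&victim) {
                self.used -= evicted.len();
            }
        }
    }

    fn get(&mut self, id: u64) -> Option<&Vec<u8>> {
        if self.blocks.contains_key(&id) {
            // Touch: move to the most-recently-used end.
            self.order.retain(|&b| b != id);
            self.order.push_back(id);
        }
        self.blocks.get(&id)
    }
}
```

A per-node byte budget like this also gives the cluster a natural answer to "what happens when pooled RAM fills up": hot blocks stay, cold ones silently disappear, which matches the ephemeral-cache framing.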
This is still early-stage/alpha and definitely not production-ready, but I’d love to hear your thoughts or suggestions.
Happy to answer questions!
— Vibhanshu