I got tired of trusting cloud services with my AI conversations — so I built the setup I actually wanted: encrypted disk, hardened OS, one-command deploy.
Demo: https://github.com/congzhangzh/your_openclaw/raw/main/demos/...
The idea is simple: if you're going to self-host AI, do it right — from the bare metal up.
Layer 1 — The disk: LUKS encryption + Btrfs compression (or ZFS native encryption). AI logs, API keys, model configs — everything at rest is encrypted. Someone pulls your disk? They get nothing.
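A minimal sketch of that disk layer, assuming the data disk is `/dev/sdb` and the mapper name `cryptdata` (both illustrative — substitute your own device, and note that `luksFormat` destroys existing data):

```shell
# Encrypt the raw device with LUKS2, then put Btrfs with zstd
# compression on top of the unlocked mapping.
cryptsetup luksFormat --type luks2 /dev/sdb   # DESTROYS existing data
cryptsetup open /dev/sdb cryptdata            # unlocked at /dev/mapper/cryptdata
mkfs.btrfs /dev/mapper/cryptdata
mount -o compress=zstd /dev/mapper/cryptdata /mnt/data
```

Everything written under `/mnt/data` is then compressed by Btrfs and encrypted at rest by LUKS; without the passphrase, the raw disk is ciphertext.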
Layer 2 — The OS: Debian Trixie. Stable, predictable, full toolchain. No surprise updates breaking your gateway at 3 AM.
Layer 3 — The container: Docker with Tini as PID 1 (proper signals, no zombies). Data lives on the host as plain files (~/.openclaw) — ls, cp, rsync. No opaque volumes.
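The container pattern above can be sketched with a plain `docker run` — the `--init` flag is how Docker injects Tini as PID 1. The image name and port here are illustrative placeholders, not the project's actual values:

```shell
# Tini as PID 1 (--init) for proper signal handling and zombie reaping;
# data bind-mounted from the host as plain files, not an opaque volume.
docker run -d --init \
  --name openclaw \
  --restart unless-stopped \
  -v "$HOME/.openclaw:/root/.openclaw" \
  -p 8080:8080 \
  openclaw/gateway:latest
```

Because the data lives at `~/.openclaw` on the host, normal tools work on it: `ls`, `cp`, `rsync` — and your encrypted-disk layer covers it automatically.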
Layer 4 — The gateway: OpenClaw with token auth + device approval. Connect Telegram and other channels. A guided onboarding walks you through everything.
The whole setup:
```shell
git clone https://github.com/congzhangzh/your_openclaw.git && cd your_openclaw
./shell
```
That's it. `openclaw onboard` inside the container does the rest. Monitoring tools (btop, nload, iftop) are built into the container. Ctrl+P, Ctrl+Q detaches — the gateway keeps running 24/7.
The repo includes VPS disk encryption guides and provider recommendations. MIT-licensed. I use this daily on a cheap European VPS.
Feedback welcome:
- Is the layered security overkill or just right?
- Are you encrypting your VPS disks?
- What AI backends are you running?