I wanted to give every AI tool I use access to a shared LLM wiki, and I didn't want to pay Mem0 or Cognee $100 a month for saving text files.
Every AI I use hits the same wall. The conversation ends and everything disappears. Context, files, databases, working state. Next session I’m re-explaining what we built yesterday.
I fixed this by giving every AI tool access to a persistent workspace on a Linux box. Claude, Claude Code, any MCP-compatible tool. They all share the same filesystem, the same database, the same knowledge base. One of them writes an architecture doc, the others can read it.
The whole thing cost an afternoon to build and $10/month to run.
Then I realized Pi already had everything I needed: if you rip the LLM out of Pi, you can teleport any AI in the world into its harness. Pi, Mario Zechner's open-source coding agent, publishes its execution layer as an npm package. File reads, writes, grep, find, directory listing, all exposed as factory functions. I imported Pi's tools, registered them as MCP tools, and added a bash shell:
```typescript
import {
  createReadToolDefinition,
  createWriteToolDefinition,
  createGrepToolDefinition,
  createFindToolDefinition,
  createLsToolDefinition,
} from "@earendil-works/pi-coding-agent";
```
Five imports. One extra tool for shell access. The bridge is a single TypeScript file with five dependencies: Pi’s package, the MCP SDK, Express, an OTP library, and Zod.
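The registration step looks roughly like this. The `ToolDefinition` and `ToolRegistry` shapes below are simplified stand-ins for Pi's factory output and the MCP SDK server, not the real types:

```typescript
// Illustrative sketch of the bridge's registration loop.
// ToolDefinition approximates what Pi's factory functions return;
// ToolRegistry approximates the MCP SDK server's registration surface.

type ToolDefinition = {
  name: string;
  description: string;
  execute: (args: Record<string, unknown>) => Promise<string>;
};

interface ToolRegistry {
  registerTool(
    name: string,
    description: string,
    handler: (args: Record<string, unknown>) => Promise<string>,
  ): void;
}

// Expose every Pi tool (plus the extra bash tool) through the MCP server.
function registerTools(server: ToolRegistry, tools: ToolDefinition[]): string[] {
  const registered: string[] = [];
  for (const tool of tools) {
    server.registerTool(tool.name, tool.description, tool.execute);
    registered.push(tool.name);
  }
  return registered;
}
```

In the real bridge, the array comes from calling `createReadToolDefinition()` and friends, plus one hand-written bash tool.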
A Cloudflare Worker hosts the MCP endpoint. A Cloudflare Tunnel connects it to a cheap VPS. No inbound ports, no public IP. Free tier for both. The AI connects with three strings: URL, OAuth client ID, OAuth secret.
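The Worker side is mostly request rewriting. A sketch of the idea, with an assumed header name and environment bindings (`ORIGIN_URL`, `ORIGIN_SECRET` are my labels, not from the original code):

```typescript
// Hypothetical sketch of the Worker-side proxy: keep the incoming path and
// query, swap the host for the tunnel origin, and attach a shared secret
// so the bridge only accepts traffic from this Worker.

function buildOriginRequest(
  incomingUrl: string,
  originBase: string,
  sharedSecret: string,
): { url: string; headers: Record<string, string> } {
  const incoming = new URL(incomingUrl);
  const target = new URL(incoming.pathname + incoming.search, originBase);
  return {
    url: target.toString(),
    headers: { "x-origin-secret": sharedSecret }, // bridge rejects requests without this
  };
}

// In the Worker's fetch handler this becomes roughly:
//   const { url, headers } = buildOriginRequest(request.url, env.ORIGIN_URL, env.ORIGIN_SECRET);
//   return fetch(url, { method: request.method, headers, body: request.body });
```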
Auth is layered: Clerk OAuth at the MCP connection, a shared-secret origin proxy so only my Worker reaches the bridge, and a TOTP gateway that locks every tool until I enter an authenticator code. To pass the TOTP check, I just read Claude the code from my authenticator app and it calls the TOTP tool over MCP. All of this runs on a disposable VPS, but there's no reason you couldn't use the same pattern to give Claude Code SSH access to dozens of machines. Every tool call is logged with SHA-256 hashes. Every file write creates a backup.
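The TOTP gate is a small wrapper. A sketch assuming an injected verifier and an unlock window I made up (the class and method names here are illustrative, not from Pi or the MCP SDK):

```typescript
// Sketch of the TOTP gate: one tool accepts codes, every other tool is
// wrapped so it fails until a valid code has opened the window.

type Handler = (args: Record<string, unknown>) => Promise<string>;

class TotpGate {
  private unlockedUntil = 0;

  constructor(
    private verify: (code: string) => boolean,
    private ttlMs = 15 * 60 * 1000, // assumed 15-minute unlock window
  ) {}

  // Called by the dedicated TOTP tool; a valid code opens the window.
  unlock(code: string, now = Date.now()): boolean {
    if (!this.verify(code)) return false;
    this.unlockedUntil = now + this.ttlMs;
    return true;
  }

  isUnlocked(now = Date.now()): boolean {
    return now < this.unlockedUntil;
  }

  // Wrap every other tool handler so calls are refused while locked.
  guard(handler: Handler): Handler {
    return async (args) => {
      if (!this.isUnlocked()) {
        throw new Error("Workspace locked: call the TOTP tool with a fresh code first.");
      }
      return handler(args);
    };
  }
}
```

With an OTP library like otplib, the verifier could be `(code) => authenticator.check(code, secret)`.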
I talk to Claude on my phone. We design a system architecture. Claude writes the doc to its persistent filesystem as markdown with YAML frontmatter. I open Obsidian on my laptop and the notes are there, with graph view and backlinks. I open Claude Code and it can grep for the same docs Claude wrote. If Claude needs another tool, it just installs it on the box: Claude installed PostgreSQL in userspace and now runs SQL queries alongside the markdown layer. Next conversation, any device, any tool, it greps for prior context and picks up where we left off. Ideas compound instead of evaporating.

## Why one box, many tools

Every AI tool connects to the same endpoint. The knowledge base is shared: Claude writes something, Claude Code can read it. The box is every tool's long-term memory, and each tool that touches it makes the shared context richer. One Clerk OAuth app, one TOTP code, one workspace. Any MCP-compatible tool connects with three strings.
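For concreteness, a doc in the shared workspace might look like this. The frontmatter fields are hypothetical; any YAML keys Obsidian understands will show up in graph view and backlinks:

```markdown
---
title: Auth service architecture
tags: [architecture, auth]
---

# Auth service architecture

Drafted with Claude on my phone. Claude Code can grep this file
from the same box; Obsidian renders it with backlinks.
```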
Any coding agent that publishes its execution layer as importable modules works for this. Pi happens to be modular enough that the surgery is trivial. Credit to Mario and the Pi team for building tools clean enough that someone could repurpose their agent’s internals in an afternoon. Stop building MCP servers from scratch. A coding agent already built the hard part.
Cybersecurity consultant and identity architect. I build AI agent infrastructure because the tools I want don’t exist yet.