But it stays in OpenClaw.
I built Sekha for when you need memory that travels: OpenClaw today, Claude Code tomorrow, Kimi 2.5 or Gemini the next day. Intelligent embedding-based retrieval, persistent storage, universal API.
The difference:
- OpenClaw: MEMORY.md files, internal only
- Sekha: SQLite + Chroma embeddings, REST/MCP/SDKs, any LLM via LiteLLM/OpenRouter
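To make "embedding-based retrieval" concrete, here is a minimal, self-contained sketch of the idea: store each snippet alongside a vector, then rank stored snippets by cosine similarity to the query vector. Everything here (the `ToyMemory` class, the character-frequency `toy_embed` stand-in for a real embedding model) is illustrative, not Sekha's or Chroma's actual API.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyMemory:
    """Illustrative store: pairs each snippet with an embedding vector."""
    def __init__(self, embed):
        self.embed = embed          # callable: str -> list[float]
        self.items = []             # list of (text, vector) pairs

    def add(self, text):
        self.items.append((text, self.embed(text)))

    def query(self, text, k=1):
        qv = self.embed(text)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

# Fake "embedding": a 26-bucket character-frequency vector, just to keep the
# demo runnable without a model. A real system would call an embedding API.
def toy_embed(text):
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - 97] += 1.0
    return v

mem = ToyMemory(toy_embed)
mem.add("auth module uses JWT tokens")
mem.add("billing cron runs nightly")
print(mem.query("how do tokens work", k=1)[0])  # → auth module uses JWT tokens
```

A vector database like Chroma does the same ranking at scale with approximate nearest-neighbor indexes instead of the linear scan shown here.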
Use case: OpenClaw explores a codebase, stores findings in Sekha via MCP. Next day, Claude Code reads the same context via SDK. Your analytics pipeline queries it via REST. Same memory, any tool, any model.
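A rough sketch of what the REST side of that workflow could look like. The base URL, endpoint paths, and payload fields below are all hypothetical placeholders; the code only builds the requests (no network call), so the shape of the round trip is visible.

```python
import json

# Placeholder host and paths -- Sekha's real endpoints may differ.
BASE = "https://sekha.example/api/v1"

def store_request(text, tags):
    """Build the call one tool (e.g. OpenClaw, via MCP) might issue to save a finding."""
    return {
        "method": "POST",
        "url": f"{BASE}/memories",
        "body": json.dumps({"text": text, "tags": tags}),
    }

def query_request(query, k=5):
    """Build the call another tool (Claude Code, an analytics job) might issue later."""
    return {
        "method": "GET",
        "url": f"{BASE}/memories/search?q={query}&k={k}",
    }

req = store_request("payment retries live in billing/retry.rs", ["codebase", "billing"])
print(req["method"], req["url"])  # → POST https://sekha.example/api/v1/memories
```

The point of the shape: both calls hit the same store, so whichever tool issues them reads and writes the same memory.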
Others add memory to OpenClaw. Sekha frees your memory from OpenClaw.
Stack: Rust (fast), SQLite (durable), Chroma (search), LLM-Bridge for universal routing. AGPL, self-hosted.
GitHub: https://github.com/sekha-ai/sekha-controller | Site: https://sekha.dev
The question: What would you build if your AI memory worked with every tool, not just one?
sekha-ai•1h ago
Sekha solves the part OpenClaw doesn't: making that memory portable, auditable, and enterprise-ready. SQLite (Postgres option coming soon) instead of Markdown files. APIs instead of file boundaries. Same embedding-based retrieval, but universal.
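"SQLite instead of Markdown" boils down to this: a durable table any tool can write to and any other tool can read later. A minimal sketch using Python's stdlib `sqlite3`; the schema is invented for illustration, not Sekha's actual layout.

```python
import sqlite3

# In-memory DB for the demo; point this at a file path for real persistence.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id INTEGER PRIMARY KEY,
        source TEXT NOT NULL,               -- which tool wrote the entry
        text   TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# One tool writes...
conn.execute("INSERT INTO memories (source, text) VALUES (?, ?)",
             ("openclaw", "auth flow documented in docs/auth.md"))
conn.commit()

# ...any other tool reads the same row later, with SQL instead of file parsing.
row = conn.execute("SELECT source, text FROM memories").fetchone()
print(row)  # → ('openclaw', 'auth flow documented in docs/auth.md')
```

Unlike a MEMORY.md file, this gives you transactions, concurrent readers, and queryable provenance (the `source` and `created_at` columns) for free.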
Stack: Rust (fast path), Python (bridge), Chroma (search), MCP/REST/SDKs (universal). AGPL, self-hosted, actually free.
Happy to answer any questions.