Working across multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without an index and doesn't carry over between projects. I built Linggen to solve this.
My Workflow: I use the Linggen VS Code extension to "init my day." One click calls the Linggen MCP server, which loads the indexed docs and the full architectural context instantly. Because everything is already indexed, Linggen effectively remembers the project between sessions, and the "cold start" problem goes away.
The Tech:
Local-First: Rust + LanceDB. Code and embeddings stay on your machine (rough sketch below the list). No accounts required.
Team Memory: Index knowledge so teammates' LLMs get context automatically.
Visual Map: See file dependencies and the "blast radius" of a refactor.
MCP-Native: Supports Cursor, Zed, and Claude Desktop.
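To make "local-first" concrete, here's roughly what an on-disk embedding index looks like. This is a simplified sketch using LanceDB's Python bindings for brevity; Linggen itself is Rust, and the paths, schema, and vector size here are illustrative, not Linggen's actual code:

    # Hypothetical local index: everything lives in a directory on disk.
    import os
    import lancedb

    db = lancedb.connect(os.path.expanduser("~/.linggen/index"))  # no remote service
    table = db.create_table(
        "docs",
        data=[{
            "path": "docs/architecture.md",
            "text": "Node A streams events to Node B over gRPC ...",
            "vector": [0.1] * 384,                # embedding from a local model
        }],
        mode="overwrite",
    )
    hits = table.search([0.1] * 384).limit(3).to_list()  # nearest-neighbour lookup
    print([h["path"] for h in hits])

Retrieval is then just a vector query against that local table, which is the kind of context the MCP server can hand to your editor's LLM.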
Linggen saves me hours. I’d love to hear how you manage complex system context!
Repo: https://github.com/linggen/linggen Website: https://linggen.dev
linggen•2h ago
Linggen is a local-first memory layer that gives AI persistent context across repos, docs, and time. It integrates with Cursor / Zed via MCP and keeps everything on-device.
I built this because I kept re-explaining the same context to AI across multiple projects. Happy to answer any questions.
Y_Y•1h ago
linggen•1h ago
Claude Desktop connects to Linggen via a local MCP server (localhost), so indexing and memory stay on-device. The LLM can query that local context, but Linggen doesn't push your data to the cloud.
Claude’s web UI doesn’t support local MCP today — if it ever does, it would just be a localhost URL.
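Roughly, the client side of that local connection looks like this. It's a simplified sketch using the official MCP Python SDK over stdio; the "linggen" command and the "load_memory" tool name are placeholders, not the exact CLI:

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Spawn a local MCP server (placeholder command) and query it.
    server = StdioServerParameters(command="linggen", args=["mcp"])

    async def main():
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])      # e.g. a memory-loading tool
                result = await session.call_tool(
                    "load_memory", arguments={"project": "my-app"}
                )
                print(result)

    asyncio.run(main())

Everything in that exchange happens between two processes on your machine; only whatever the tool returns ends up in the LLM's context.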
ithkuil•8m ago
linggen•2m ago
The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.