I built this because I kept hitting the same problem: AI tools are powerful but have no memory of my complex multi-repo project. They can't search our internal docs, past incidents, or architecture decisions. Cloud RAG services exist, but they're complex, expensive, and your data leaves your machine. I wanted something I could point at my sources and just run `ctx sync all`.
Quick start:
# Install (pre-built binaries available for macOS/Linux/Windows)
cargo install --git https://github.com/parallax-labs/context-harness.git
# Create config and initialize
ctx init
# Sync your data sources (filesystem, Git, S3, or Lua scripts)
ctx sync all
# Search from CLI
ctx search "how does the auth service validate tokens"
# Or start the MCP server for Cursor/Claude Desktop
ctx serve mcp
What it does differently from other RAG tools:

- *Truly local*: SQLite + single binary. No Docker, no Postgres, no cloud. Local embeddings (bundled or pure-Rust), so semantic and hybrid search work with zero API keys. Back up your entire knowledge base with `cp ctx.sqlite ctx.sqlite.bak`.
- *Hybrid search*: FTS5 keyword scoring + cosine vector similarity with configurable blending. Works without embeddings too (keyword-only mode); with local embeddings you get full hybrid search offline.
- *Lua extensibility*: Write custom connectors, tools, and agents in Lua without recompiling anything. The Lua VM has HTTP, JSON, crypto, and filesystem APIs built in.
- *Extension registry*: `ctx registry init` installs a Git-backed community registry with 10 connectors (Jira, Confluence, Slack, Notion, RSS, Stack Overflow, Linear, etc.), 4 MCP tools, and 2 agent personas.
- *MCP protocol*: Cursor, Claude Desktop, Continue.dev, and any MCP-compatible client can connect and search your knowledge base directly.
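As a purely illustrative sketch of what the client wiring can look like, here is a Claude Desktop style `mcpServers` entry. The server name, and whether the client spawns the binary directly or connects to the Streamable HTTP endpoint `ctx serve mcp` exposes, are assumptions; check the docs for the real config:

```json
{
  "mcpServers": {
    "context-harness": {
      "command": "ctx",
      "args": ["serve", "mcp"]
    }
  }
}
```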
Embeddings: you can run *fully offline* — the default build uses local embeddings (fastembed with bundled ONNX on most platforms, or a pure-Rust tract path on Linux musl and Intel Mac). No API key required. Optional: Ollama (local LLM stack) or OpenAI if you prefer. Keyword-only mode needs zero deps. There's no built-in auth layer; it's designed for local or trusted network use.
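For intuition about the hybrid mode mentioned above, here is a minimal sketch of blending a keyword score with cosine vector similarity under a configurable alpha. The function names and the linear-blend formula are illustrative assumptions, not the project's actual code:

```rust
// Cosine similarity between two embedding vectors (0.0 if either is zero-length).
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

// Hypothetical blend: alpha = 1.0 is pure vector search, alpha = 0.0 is pure keyword.
fn blend(alpha: f64, vector_sim: f64, keyword_score: f64) -> f64 {
    alpha * vector_sim + (1.0 - alpha) * keyword_score
}

fn main() {
    let sim = cosine(&[1.0, 0.0], &[1.0, 0.0]); // identical vectors -> 1.0
    println!("{}", blend(0.7, sim, 0.5)); // 0.7 * 1.0 + 0.3 * 0.5 = 0.85
}
```

In keyword-only mode you can think of this as alpha fixed at 0.0, which is why no embedding model is needed for that path.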
Stack: Rust, SQLite (WAL mode), FTS5, mlua (Lua 5.4), axum, MCP Streamable HTTP. MIT licensed.
GitHub: https://github.com/parallax-labs/context-harness
Docs: https://parallax-labs.github.io/context-harness/
Community Registry: https://github.com/parallax-labs/ctx-registry
If you find it useful, a star on GitHub is always appreciated.
Would love feedback on the search quality tuning (hybrid alpha, candidate counts) and the Lua extension model.
fidorka•19m ago
Btw, I built something similar to solve the context problem for most of my laptop-based activity.
It's slightly more heavyweight (an Electron app ingesting screenshots), but I made many similar design decisions: local embeddings, SQLite with hybrid vector + FTS search, an MCP extension for Claude. Feel free to check it out:
https://github.com/deusXmachina-dev/memorylane
__parallaxis•4m ago
I've exercised most of the API surface so far. Check out the usage in this GitHub Actions script: https://github.com/parallax-labs/context-harness/blob/main/s...
It's used to build the search index for the docs site.
The tool is designed not only for local use but also for embedding in a CI context.
MemoryLane looks really cool: same problem, different surface. Local embeddings + SQLite + hybrid FTS/vector + MCP into Claude is basically the same stack, and the screenshot-ingestion Electron UX is a neat take on "everything I've seen on this machine." I'll definitely poke around the repo. If you want to see how we're using custom agents on top of that pipeline, a couple of blog posts go into it:
- Chat with your blog: https://parallax-labs.github.io/context-harness/blog/enginee... (a persona grounded in your own writing, using inline + Lua agents). This is in the same vein as MemoryLane; letting agents write into the vector store via an MCP tool is on the roadmap.
- Unified context for your engineering team: https://parallax-labs.github.io/context-harness/blog/enginee... (shared KB, code-reviewer and on-call agents).