If you use Claude Code, Cursor, or ChatGPT, you already know the cold-start problem: every conversation starts from zero. Even with MCP
connectors you're pulling from specific tools (Slack, Linear, GitHub), but none of them knows what you were actually doing five minutes ago. You
still end up typing "I'm working on the payments refactor, I just changed three files, Sarah approved the pricing in Slack, and the ticket is
PROD-847."
lurk eliminates that. It's a local macOS daemon that continuously observes your desktop (window titles, screen content via OCR, git diffs,
calendar, input state) and serves a unified context to any AI tool. Not connector by connector: everything at once, automatically.
The difference from connectors: a Slack MCP gives you access to messages, but which ones? You have 47 channels. Google Drive gives you docs, but
which doc matters right now? lurk knows, because it watched you read the pricing doc, switch to Slack to discuss it with Sarah, then open VS Code to
implement it. It connects the dots that individual connectors can't.
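Concretely, the unified context could be a single snapshot like the one below. The field names are illustrative only, not lurk's actual schema:

```json
{
  "active_app": "Visual Studio Code",
  "window_title": "checkout.ts — payments-refactor",
  "recent_activity": [
    {"app": "Slack", "summary": "pricing discussion with Sarah"},
    {"app": "Chrome", "ocr_excerpt": "Pricing proposal ..."}
  ],
  "git": {"branch": "payments-refactor", "dirty_files": 3},
  "ticket_hint": "PROD-847"
}
```

An AI tool that receives this in one shot no longer needs the "here's what I'm doing" preamble typed by hand.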
How it works:
- Swift daemon polls every ~3s (window titles, screenshots, input state)
- Python engine enriches events: 30+ app-specific parsers, activity classification, OCR via macOS Vision
- Reads your project's README on startup so it knows what the project actually is
- Git watcher captures real diffs — the actual code changes, not just file names
- MCP server for Claude Code/Cursor, HTTP API on localhost:4141 for everything else
- Chrome extension for one-click context injection into Claude, ChatGPT, Gemini
- Optional local LLM (Ollama) clusters activity into coherent work threads
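The enrichment step can be pictured as a dispatch table of per-app parsers that turn raw window titles into structured events. A minimal sketch under assumed title formats; the real engine's parser names and fields may differ:

```python
import re
from typing import Optional

def parse_vscode(title: str) -> Optional[dict]:
    """VS Code titles on macOS typically read 'file — workspace'."""
    m = re.match(r"(?P<file>.+?) — (?P<workspace>.+)", title)
    if not m:
        return None
    return {"app": "vscode", "file": m["file"], "workspace": m["workspace"]}

def parse_linear(title: str) -> Optional[dict]:
    """Pull a ticket ID like PROD-847 out of a Linear tab title."""
    m = re.search(r"\b([A-Z]+-\d+)\b", title)
    if not m:
        return None
    return {"app": "linear", "ticket": m.group(1)}

# Hypothetical registry; lurk ships 30+ of these.
PARSERS = {"Visual Studio Code": parse_vscode, "Linear": parse_linear}

def enrich(app: str, title: str) -> dict:
    parser = PARSERS.get(app)
    event = parser(title) if parser else None
    return event or {"app": app, "title": title}  # fall back to the raw title
```

For example, `enrich("Linear", "PROD-847 · Payments refactor")` yields `{"app": "linear", "ticket": "PROD-847"}`, which is far more useful to a downstream AI tool than the raw title string.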
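Tools without MCP support can pull the same context over the local HTTP API. A standard-library sketch; the `/context` endpoint path and response shape are assumptions, not lurk's documented API:

```python
import json
import urllib.request

def parse_snapshot(raw: bytes) -> dict:
    """Decode a context snapshot returned by the daemon."""
    return json.loads(raw.decode("utf-8"))

def fetch_context(base: str = "http://localhost:4141", path: str = "/context") -> dict:
    # Hypothetical route; check lurk's docs for the real one.
    with urllib.request.urlopen(base + path, timeout=2) as resp:
        return parse_snapshot(resp.read())

if __name__ == "__main__":
    print(json.dumps(fetch_context(), indent=2))
```

Because everything runs on localhost, no auth or API key is involved; the call fails fast if the daemon isn't running.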
What it doesn't do:
- No cloud. Everything stays on your machine.
- No telemetry. No accounts. No API keys required.
- Doesn't send your screen content anywhere; the consuming AI tool reads it locally.
Get started:
npx lurk-cli onboard

GitHub: https://github.com/lurk-cli/lurk