The core idea: AI should live inside the reader, not beside it. Select any passage, hit "Ask AI", and get a response grounded in the entire document. But the feature I'm most proud of: you can summarize your entire AI conversation and attach it directly as a comment on the PDF — so your insights stay with the document, not lost in some chat history.
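The summary-to-comment flow could be modeled with a small annotation record before it's serialized into the PDF. This is a hypothetical sketch; the field names (`SummaryAnnotation`, `toAnnotation`) and the default anchor rectangle are illustrative, not the project's actual schema.

```typescript
// Hypothetical shape for attaching an AI conversation summary as a
// PDF comment. Field names are illustrative, not Lexio's real schema.
interface SummaryAnnotation {
  page: number; // 1-based page the comment is anchored to
  rect: [number, number, number, number]; // anchor rectangle in PDF points
  author: string;
  contents: string; // the AI-generated summary text
  createdAt: string; // ISO timestamp
}

function toAnnotation(summary: string, page: number): SummaryAnnotation {
  return {
    page,
    rect: [72, 72, 200, 100], // arbitrary default anchor near the margin
    author: "Lexio AI",
    contents: summary,
    createdAt: new Date().toISOString(),
  };
}
```

A record like this can then be written into the PDF as a standard text annotation, so any reader that understands comments can display it.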
AI backends:

- Ollama — local, fully offline, nothing leaves your machine
- Claude, OpenAI, Gemini — cloud via API key
- Clean provider abstraction, ~50 lines to add a new backend
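A provider abstraction like the one described might boil down to a single streaming interface that every backend implements. This is a sketch under assumptions — the names (`AIProvider`, `ChatMessage`, `EchoProvider`) are mine, and the toy provider stands in for a real backend that would call Ollama's or a cloud API's endpoint.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// One interface per backend: yield response chunks as they arrive.
interface AIProvider {
  name: string;
  stream(messages: ChatMessage[]): AsyncIterable<string>;
}

// Toy provider to show the shape; a real Ollama backend would
// POST to the local Ollama server and stream the reply instead.
class EchoProvider implements AIProvider {
  name = "echo";
  async *stream(messages: ChatMessage[]): AsyncIterable<string> {
    const last = messages[messages.length - 1];
    for (const word of last.content.split(" ")) {
      yield word + " ";
    }
  }
}

// Registry keyed by name, so adding a backend is one new class
// plus one map entry.
const providers = new Map<string, AIProvider>([
  ["echo", new EchoProvider()],
]);
```

With this shape, the UI only ever talks to `AIProvider`, which is why a new backend stays small.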
Full document context is passed in the system prompt (not just the selected snippet), streaming responses throughout, multiple conversations per document.
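Grounding in the full document could look like this: the extracted PDF text goes into the system prompt, and the selection rides along with the user's question. A minimal sketch — `buildMessages` is a hypothetical helper, not the actual implementation.

```typescript
type Role = "system" | "user";

interface Message {
  role: Role;
  content: string;
}

// Assemble the prompt: whole document in the system message,
// selected passage plus question in the user message.
function buildMessages(
  documentText: string,
  selection: string,
  question: string
): Message[] {
  return [
    {
      role: "system",
      content:
        "You are a reading assistant. Ground every answer in this document:\n\n" +
        documentText,
    },
    {
      role: "user",
      content: `Selected passage:\n"${selection}"\n\nQuestion: ${question}`,
    },
  ];
}
```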
Stack: Electron, React, PDF.js, Zustand, TypeScript.
Repo: https://github.com/nikodemseb/lexio
Happy to talk through the architecture or the annotation-to-PDF pipeline.