If that still doesn't work, please choose a Gemini model (Gemini 2.5/2.0 Flash) and check that you have set a Gemini API key under the API Key section of the Settings page. PDF extraction requires a Gemini API key.
For now, PDF support only works with Gemini and Anthropic models. I plan to support more models soon.
vinhnx•16h ago
App: https://vtchat.io.vn
Open Source: https://github.com/vinhnx/vtchat
The key difference is true local-first architecture. Everything lives in your browser: chats are stored in IndexedDB, there is zero server storage, and your API keys never leave your device. I literally can't see your data, even as the developer.
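To make the local-first idea concrete, here's a minimal sketch of what a browser-side chat store could look like. This is hypothetical illustration, not vtchat's actual code: the `LocalChatStore` class, `ChatRecord` shape, and the in-memory `Map` backend are all assumptions. In a real build the Map would be replaced by an IndexedDB object store; the point is that every read and write stays on the device, with no server round-trip.

```typescript
// Hypothetical sketch of a local-first chat store (not vtchat's actual code).
// A Map stands in for an IndexedDB object store so the shape is clear:
// all persistence happens on-device, nothing is sent to a server.

interface ChatRecord {
  id: string;
  messages: { role: "user" | "assistant"; content: string }[];
  updatedAt: number;
}

class LocalChatStore {
  // In a browser build, swap this Map for an IndexedDB object store.
  private backend = new Map<string, ChatRecord>();

  // Async signatures mirror IndexedDB's asynchronous API.
  async save(chat: ChatRecord): Promise<void> {
    this.backend.set(chat.id, { ...chat, updatedAt: Date.now() });
  }

  async load(id: string): Promise<ChatRecord | undefined> {
    return this.backend.get(id);
  }

  async list(): Promise<string[]> {
    return [...this.backend.keys()];
  }
}
```

Keeping the API keys in the same on-device store (and calling providers directly from the browser) is what would let the developer truthfully claim they never see user data.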
Switch between 15+ AI providers (OpenAI, Anthropic, Google, etc.) and compare responses side by side. It's safe for shared machines, with complete data isolation between users. Local models via LM Studio, llama.cpp, and Ollama are on the roadmap.
Research features that I actually use daily: Deep Research does multi-step research with source verification, Pro Search integrates real-time web search, and AI Memory builds a personal knowledge base from your conversations. Also includes PDF processing, "thinking mode" to watch AI reasoning unfold, structured data extraction, and semantic routing that automatically activates the right tools.
Built with Next.js 14, TypeScript, and Turborepo. Fully open source for self-hosting. The hosted version is mostly free, with optional VT+ for premium models and advanced research capabilities.
Would love feedback on the local-first approach or any questions about the implementation.