I’ve built Ragmate – a local Retrieval-Augmented Generation (RAG) server that integrates with JetBrains IDEs via the built-in AI Assistant.
It scans your codebase, builds a local index, and serves relevant context to the LLM of your choice (e.g., OpenAI, Ollama). This means smarter code completions and answers that understand your actual project.
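To make the "index and retrieve" step concrete, here is a minimal sketch of how a local RAG server can rank project files against a query. This is purely illustrative and not Ragmate's actual implementation: it uses a bag-of-words cosine similarity instead of learned embeddings, and all names (`build_index`, `retrieve`, the sample files) are hypothetical.

```python
# Illustrative sketch of local RAG retrieval (not Ragmate's real internals):
# index each file as a bag-of-words vector, then rank files by cosine
# similarity to the query and return the top-k as context for the LLM.
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-zA-Z_]\w+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(files: dict[str, str]) -> dict[str, Counter]:
    # One vector per file; a production system would chunk large files
    # and use embedding models rather than raw token counts.
    return {path: Counter(tokenize(src)) for path, src in files.items()}

def retrieve(index: dict[str, Counter], query: str, k: int = 2) -> list[str]:
    qv = Counter(tokenize(query))
    ranked = sorted(index, key=lambda p: cosine(index[p], qv), reverse=True)
    return ranked[:k]

# Hypothetical mini-project to demonstrate retrieval:
files = {
    "models.py": "class Invoice: def total(self): return sum(i.price for i in self.items)",
    "views.py": "def invoice_detail(request, pk): invoice = Invoice.objects.get(pk=pk)",
    "README.md": "Run the dev server with python manage.py runserver",
}
index = build_index(files)
print(retrieve(index, "how is the invoice total computed?"))  # 'models.py' ranks first
```

The retrieved file contents would then be prepended to the prompt sent to the configured LLM, which is what lets completions reference your actual project instead of generic code.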
Key features:
- Works locally via Docker (self-hosted)
- Uses .gitignore and .aiignore to avoid noise
- Framework-aware responses (Django, React, etc.)
- Language support for Python, JS, PHP, Java, and more
JetBrains IDEs are supported today, and VS Code support is coming soon. Support for Gemini, Claude, Mistral, and DeepSeek is also planned.
It’s free, open-source, and built for developers who want privacy + context-aware generation.
Would love your feedback!