I originally built this over Christmas break as a "self-serve business intelligence" tool to stop fielding repetitive coworker questions ("write me a query for X," "where's the logic for Y?"), but it has since morphed into a dev tool as well. I couldn't find anything that centralized these tools and served them over chat.
I've been using it daily (via MCP) for work and it's been a huge development accelerator for me. I want to share what I've built with the community and get feedback, and, if you're so inclined, contributions. Happy to answer questions about the architecture or use cases.
(You can stop here, unless you want more technical details...)
Tools: The agent gets access to configurable tools you define: SSH connections to servers (run commands, check logs, restart services), database queries via SSH tunnels (PostgreSQL, MySQL, MSSQL) with parameterized queries to prevent injection, and vector search over your indexed content. Each tool is defined in a config with connection details, and you can enable/disable them per use case. The database tools return structured results the LLM can reason about. SSH tools stream output for long-running commands. There's also a Python REPL tool for data manipulation when the LLM needs to transform query results.
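To illustrate the parameterized-query idea (the project's actual tool interface isn't shown here, so this is a minimal sketch using Python's DB-API, with sqlite3 standing in for the PostgreSQL/MySQL/MSSQL drivers; `run_query` is a hypothetical helper name):

```python
import sqlite3  # stand-in for psycopg2/pymysql; same DB-API pattern applies

def run_query(conn, sql, params=()):
    """Execute a parameterized query and return structured rows.

    The driver binds `params` separately from the SQL text, so user input
    is never interpolated into the query string -- this is what blocks
    injection. (sqlite3 uses `?` placeholders; psycopg2/pymysql use `%s`.)
    """
    cur = conn.execute(sql, params)
    cols = [c[0] for c in cur.description]
    # List-of-dicts output: structured results the LLM can reason about.
    return [dict(zip(cols, row)) for row in cur.fetchall()]
```

The structured (column-named) return shape matters more than it looks: the model can reference fields by name instead of parsing a raw tuple dump.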
On the RAG/indexing side: Chunking uses Chonkie's CodeChunker with Magika (Google's ML model) for automatic language detection, then tree-sitter for AST-aware splitting that respects semantic boundaries (functions, classes, blocks). Each code chunk gets a header with file path and import context so the LLM knows where it came from. Retrieval uses MMR (Maximal Marginal Relevance) to reduce near-duplicate results, balancing relevance with diversity via a configurable lambda parameter. FAISS indexes are portable and can be exported.
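The MMR step above can be sketched as a greedy loop over cosine similarities (a simplified illustration with NumPy, not the project's code; `mmr_select` and its signature are my own naming, and vectors are assumed L2-normalized so dot product equals cosine similarity):

```python
import numpy as np

def mmr_select(query_vec, doc_vecs, k=3, lam=0.7):
    """Greedy Maximal Marginal Relevance over unit-normalized embeddings.

    lam is the configurable lambda: 1.0 means pure relevance,
    0.0 means pure diversity.
    """
    relevance = doc_vecs @ query_vec  # cosine sim to the query
    selected, candidates = [], list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        if not selected:
            # First pick is always the most relevant document.
            best = max(candidates, key=lambda i: relevance[i])
        else:
            def score(i):
                # Penalize similarity to anything already selected.
                redundancy = max(doc_vecs[i] @ doc_vecs[j] for j in selected)
                return lam * relevance[i] - (1 - lam) * redundancy
            best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low lambda, a near-duplicate of the top hit gets penalized and a less similar but more diverse chunk is picked second, which is exactly the near-duplicate suppression described above.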