A few months ago I gave h-cli an AI brain: it lets you manage infrastructure by sending plain-English messages in Telegram. It uses Claude Code by default, and also works with self-hosted models through the Claude Code framework via API calls to vLLM/Ollama.
What it can do:
- "Discover the CLOS fabric and document everything in NetBox with cable detail" — 12 routers, full cabling, 4 minutes (GIF in the repo)
- Parallel REST calls across APIs in a single job — correlated results in seconds; copied from h-ssh
- EVE-NG lab automation — natural language to full lab deployment, bootstrap, and verification
- Grafana dashboard rendering straight into Telegram
- Teachable skills — demonstrate a workflow and it learns it
- Chunk-based conversation memory (24h) plus Qdrant vector memory for your own datasets (I loaded EVPN docs; it worked perfectly for creating templates and troubleshooting)
- Redis-based horizontal scaling, designed so multiple instances can later run against a shared vLLM backend
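The parallel-calls item boils down to a fan-out/fan-in pattern: fire all API requests at once, then correlate the results by name. A minimal sketch of that pattern (the names and the callables are illustrative, not the project's actual code):

```python
import concurrent.futures


def fan_out(tasks):
    """Run independent callables (e.g. REST fetches against NetBox,
    Grafana, EVE-NG) in parallel and correlate results by task name.

    tasks: dict mapping a name to a zero-argument callable.
    Returns a dict mapping each name to its callable's result.
    """
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Submit everything up front so the slowest call bounds total time.
        futures = {pool.submit(fn): name for name, fn in tasks.items()}
        # Collect as results arrive, keyed back to the originating name.
        return {futures[f]: f.result()
                for f in concurrent.futures.as_completed(futures)}
```

In the real bot each callable would wrap an HTTP client call; here the structure is the point: total latency is roughly the slowest single call, not the sum.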
Safety: a separate stateless LLM (Haiku by default, swappable for a local model) gates every command with zero conversation context, so it can't be social-engineered through the chat. On top of that: a pattern denylist, two isolated Docker networks, non-root containers, cap_drop ALL, and HMAC-signed results. 44 hardening items in total.
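The key design idea is that the gate sees only the bare command, never the chat history, so injected conversation can't sway its verdict. A minimal sketch of that shape, plus HMAC result signing; the denylist patterns, the `ask_llm` callable, and the prompt are stand-ins, not the project's actual implementation:

```python
import hashlib
import hmac
import re

# Illustrative denylist patterns (the real list is project-specific).
DENYLIST = [re.compile(p) for p in (r"\brm\s+-rf\b", r"\bmkfs\b", r"\bdd\s+if=")]


def gate(command: str, ask_llm) -> bool:
    """Approve or reject a command using ONLY the command text.

    ask_llm is a stand-in for the stateless classifier call (e.g. Haiku):
    it receives a fresh prompt with no conversation context, so a
    prompt-injected chat cannot influence the verdict.
    """
    if any(p.search(command) for p in DENYLIST):
        return False
    return ask_llm(f"Is this command safe to run? {command}") == "yes"


def sign_result(secret: bytes, payload: bytes) -> str:
    """HMAC-SHA256 over an execution result, so the bot can verify that
    a result really came from the executor and wasn't tampered with."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()
```

Verification on the receiving side would recompute the tag and compare with `hmac.compare_digest` to avoid timing leaks.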
Self-hosted, Docker Compose, 9 containers, MIT licensed.
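For a sense of what the container hardening looks like in Compose terms, here is an illustrative fragment (not the actual file; service and network names are made up) showing non-root, cap_drop ALL, and an internal-only network:

```yaml
# Illustrative Compose fragment, not the project's real compose file.
services:
  agent:
    image: h-cli/agent        # hypothetical image name
    user: "1000:1000"         # run as non-root
    cap_drop: [ALL]           # drop every Linux capability
    networks: [internal]      # attached only to the isolated network
networks:
  internal:
    internal: true            # no outbound access from this network
  egress: {}                  # separate network for containers that need egress
```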
The interesting part might be how it was built: one operator coordinating 8 parallel AI agent teams, zero human developers. The development methodology doc covers the full process: architecture, coordination via git + Redis, and conflict resolution between agents. I did, of course, review the code changes myself, hence the commit discipline.