I built Zehrava Gate because I kept having the same problem: AI agents that could read anything worked great, but the moment I let one write (update a CRM, send an email, charge a card) I had no consistent answer to "who authorized that?"
What it is: a self-hosted policy engine plus approval queue for agent actions. The agent submits an intent before executing. Gate evaluates a YAML policy (deterministically — no LLM), scores risk, and either auto-approves, holds for human review, or blocks. Every decision is logged with a signed execution token. The agent can't skip it.

V2 (SDK-based):

- gate.propose() → policy decision in ~2ms
- Human approval queue + dashboard
- Signed execution tokens with a 15-min TTL
- Agent registry + role-based access
- Webhook notifications on approve/reject
- Full audit ledger, keyed by intent ID
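To make the eval path concrete, here is a minimal sketch of a deterministic approve/hold/block decision. The rule fields (maxRisk, holdRisk) and the toy risk scorer are illustrative assumptions, not Gate's actual YAML schema:

```javascript
// Rules as they might look after parsing a YAML policy, e.g.:
//   - action: "crm.update"
//     maxRisk: 30    # auto-approve at or below this score
//     holdRisk: 70   # hold for human review at or below this score
function decide(intent, rules) {
  const rule = rules.find((r) => r.action === intent.action);
  if (!rule) return 'block';                // unknown actions are blocked
  const risk = score(intent);
  if (risk <= rule.maxRisk) return 'approve';
  if (risk <= rule.holdRisk) return 'hold'; // queue for a human
  return 'block';
}

// Toy scorer: purely deterministic, no LLM anywhere in the loop.
function score(intent) {
  let risk = 0;
  if (intent.amountUsd) risk += Math.min(intent.amountUsd / 10, 60);
  if (intent.external) risk += 25;          // action leaves our own systems
  return risk;
}
```

The point of keeping this path deterministic is that the same intent and policy always yield the same decision, which is what makes the audit ledger meaningful.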
V3 (proxy-based, all three phases shipped):

- HTTP forward proxy — one env var, no code changes: HTTP_PROXY=http://gate:4001
- TLS intercept for HTTPS destinations
- Credential vault mode: the agent submits only the intent; Gate fetches the credential from 1Password/HashiCorp/AWS at execution time, runs the API call, and discards it. A compromised agent has nothing to exfiltrate.
- LangChain/LangGraph integration: GateTool and gateNode() drop-ins
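The vault-mode flow above can be sketched in a few lines. Everything here is a stub for illustration — the vault client and the request executor are injected, and the function names are mine, not Gate's API:

```javascript
// Credential vault mode, sketched: the agent hands over an intent with a
// credential *reference*, never the secret itself. Gate resolves the
// reference at execution time, makes the call, and lets the secret fall
// out of scope immediately.
async function executeIntent(intent, vault, doRequest) {
  // e.g. intent.credentialRef = 'vault://stripe/api-key'
  const secret = await vault.fetch(intent.credentialRef);
  // Gate performs the API call on the agent's behalf; the agent only
  // ever sees the response, never the credential.
  return doRequest(intent, secret);
}
```

Since the secret only ever exists inside Gate's process, dumping the agent's memory or environment yields nothing usable.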
Honest scope: This protects against mistakes and enforces policy — not against a fully compromised agent that controls its own runtime. The proxy architecture closes that gap further, but there's no magic bullet for fully adversarial runtimes.
Stack: Node.js, SQLite (better-sqlite3), YAML policies, zero LLM in the eval path.
npm: npm install zehrava-gate

GitHub: https://github.com/cgallic/zehrava-gate

Live demo: https://zehrava.com/demo