Hey HN! I built AgentGate, an open-source security firewall for AI agents built on OpenClaw.
The problem: AI agents (browser, bash, fetch) can call any tool without restriction. There's no built-in way to define policies like "agents can read files but not delete them" or "require human approval before sending emails."
AgentGate solves this with a policy evaluation layer that wraps your OpenClaw tool calls and enforces ALLOW/DENY/REQUIRE_APPROVAL rules in real time.
How it works:
1. You define policies in Firebase Firestore (regex-matched against tool name + args)
2. Every tool call hits the AgentGate middleware before execution
3. ALLOW → passes through, DENY → blocked immediately, REQUIRE_APPROVAL → Telegram webhook fires, human approves/rejects in real time
4. Full audit log stored in Firestore with timestamps, agent ID, tool called, decision, and approver
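To make the evaluation step concrete, here is a minimal sketch of how a regex-based policy check like the one described above could work. All names here (the Policy shape, the evaluate function, default-deny behavior) are illustrative assumptions, not AgentGate's actual API:

```typescript
// Hypothetical policy shape — field names are assumed, not AgentGate's schema.
type Decision = "ALLOW" | "DENY" | "REQUIRE_APPROVAL";

interface Policy {
  pattern: string;    // regex matched against "toolName arg1 arg2 ..."
  decision: Decision;
}

// First matching policy wins; unmatched calls fall through to DENY.
// (Default-deny is a safe assumption here; the real engine may differ.)
function evaluate(policies: Policy[], toolName: string, args: string[]): Decision {
  const callString = [toolName, ...args].join(" ");
  for (const policy of policies) {
    if (new RegExp(policy.pattern).test(callString)) {
      return policy.decision;
    }
  }
  return "DENY";
}
```

So a rule like `{ pattern: "^bash rm", decision: "DENY" }` would block `bash rm -rf /` before it ever executes.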
Stack: Next.js dashboard, Firebase Firestore + Auth, OpenClaw bash/browser/fetch agents, Telegram Bot API for approval notifications.
What's working in v1.0.0:
- Policy engine (ALLOW/DENY/REQUIRE_APPROVAL)
- Real-time audit log dashboard
- AI Policy Wizard (describe a rule in English → generates the regex policy)
- Telegram approval flow
- Tested against OpenClaw bash, browser, and fetch tool types
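As an illustration of the policy model, a hypothetical Firestore policy document for the "require human approval before sending emails" rule mentioned above might look like this (field names and the SendGrid endpoint are assumptions for the example, not the actual schema):

```typescript
// Hypothetical policy document — field names are illustrative, not AgentGate's schema.
const sendEmailPolicy = {
  // Regex matched against tool name + args, e.g. "fetch https://api.sendgrid.com/..."
  pattern: "^fetch .*api\\.sendgrid\\.com",
  decision: "REQUIRE_APPROVAL",
  description: "Require human sign-off before any email-sending API call",
};
```

A call matching this pattern would pause, fire the Telegram webhook, and wait for an approve/reject before the fetch runs.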
Would love feedback on the policy model, the approval UX, and whether this is useful for teams running autonomous agents in production.
GitHub + docs: https://agent-gate-rho.vercel.app/