I’m Yannis, co-founder of Preloop. We’ve built a proxy for the Model Context Protocol (MCP) that lets you add human approval gates to your AI agents without changing your agent code.
We’re building agents that use tools through clients like Claude Desktop and Cursor, but we were terrified to give them write access to sensitive systems (Stripe, production DBs, AWS). We didn't want to rewrite our agents to wrap every tool call in complex "ask_user" logic, especially since we use different agent runtimes.
We built Preloop as a middleware layer that acts as a standard MCP server proxy. You point your agent at Preloop instead of the raw tool and define policies (e.g., "allow payments under $50, but require approval above $50"). When a tool call triggers a rule, we intercept the JSON-RPC request and hold the connection open. You get a push notification (mobile/web/email) to Approve/Deny. Once approved, we forward the request to the actual tool and return the result to the agent.
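Conceptually, the policy check and interception step might look something like this minimal Python sketch. The policy schema, tool names, thresholds, and the `ask_human`/`forward` hooks are all illustrative assumptions on my part, not Preloop's actual API:

```python
# Hypothetical policy table: each rule names a tool, a predicate over the
# call arguments, and whether a match requires a human in the loop.
POLICIES = [
    {
        "tool": "stripe.create_payment",
        "predicate": lambda args: args.get("amount", 0) > 50,
        "action": "require_approval",
    },
]


def requires_approval(tool: str, args: dict) -> bool:
    """Return True if any matching policy demands human approval."""
    for rule in POLICIES:
        if tool == rule["tool"] and rule["predicate"](args):
            return rule["action"] == "require_approval"
    return False  # default: let the agent proceed autonomously


def handle_request(request: dict, ask_human, forward) -> dict:
    """Intercept one JSON-RPC 2.0 'tools/call' message from an MCP client.

    `ask_human(request)` blocks until the reviewer answers (push notification
    plumbing omitted); `forward(request)` proxies to the real tool server.
    """
    tool = request["params"]["name"]
    args = request["params"].get("arguments", {})
    if requires_approval(tool, args) and not ask_human(request):
        # Denied: answer the agent with a JSON-RPC error instead of forwarding.
        return {
            "jsonrpc": "2.0",
            "id": request["id"],
            "error": {"code": -32000, "message": "Denied by human reviewer"},
        }
    return forward(request)
```

In this sketch a $30 payment passes straight through to `forward`, while a $500 payment blocks on `ask_human` and turns into a JSON-RPC error if the reviewer denies it.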
We put together a short video showing Claude Code trying to send money. It gets paused automatically when it exceeds the limit: https://www.youtube.com/watch?v=yTtXn8WibTY
We’re compatible with any client that supports MCP (Claude Desktop, Cursor, etc.). We also have a built-in automation platform if you want to host the agents yourself, but the proxy works standalone.
We’re looking for feedback on the architecture and the approval flow. Is the "Proxy" approach the right way to handle agent safety, or do you prefer SDKs?
You can try it out here: https://preloop.ai
Docs: https://docs.preloop.ai
Thanks!
dim0r•1h ago
We built this because we’re seeing a shift from "chatting with agents" to event-driven flows (agents reacting to webhooks, PRs, or tickets in the background).
The problem we hit was responsibility. An agent can technically execute a stripe.refund tool call, but it cannot weigh the consequences of a $50 refund vs. a $5,000 refund. It lacks the context of risk.
We built the proxy to bridge that gap. It lets the agent run autonomously 99% of the time, but forces a "hardware interrupt" (human check) when the stakes get high. We handle the state management of pausing that headless workflow so you don't have to build custom polling logic into every bot.
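One way to sketch that "hardware interrupt" is a future per pending approval: the paused tool call awaits the future (keeping the agent's connection open), and the approval UI resolves it. The names, the fail-closed timeout, and the commented-out notification hook are my assumptions, not Preloop's internals:

```python
import asyncio
import uuid

# Hypothetical registry of paused tool calls, keyed by approval id.
PENDING: dict[str, asyncio.Future] = {}


async def hold_for_approval(request: dict, timeout: float = 3600) -> bool:
    """Park a paused tool call; the agent's connection stays open on this await."""
    approval_id = str(uuid.uuid4())
    PENDING[approval_id] = asyncio.get_running_loop().create_future()
    # send_push_notification(approval_id, request)  # mobile/web/email, out of scope
    try:
        return await asyncio.wait_for(PENDING[approval_id], timeout)
    except asyncio.TimeoutError:
        return False  # unanswered approvals fail closed
    finally:
        del PENDING[approval_id]


def resolve(approval_id: str, approved: bool) -> None:
    """Called by the approval UI (e.g. a webhook) when the human clicks Approve/Deny."""
    fut = PENDING.get(approval_id)
    if fut is not None and not fut.done():
        fut.set_result(approved)
```

The point of centralizing this state in the proxy is exactly what the parent describes: every headless bot gets pause/resume for free instead of each one growing its own polling loop.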