- Default deny (nothing runs without explicit permission)
- Time-boxed permissions (grant access for minutes, not forever)
- Full audit logs (know exactly what happened and why)
Think of it as: the model decides, but Reg.Run approves or blocks before side effects happen. Auth0 for AI Agents, if you will.

Why I'm here: I'm pre-cofounder, running design partner discovery right now. I have a website running, an MVP, and I'm finishing what I'm calling the APAA (Authorization Protocol for AI Agents), open to everyone on GitHub.

I know I'm not the typical founder here. I can't write elegant code. But I've lived through what happens when systems act with implicit authority, and I believe we need this infrastructure before we scale agents everywhere. Sort of like seatbelts and working brakes: with them, you can probably go a little faster, right?

What I've built:
https://reg-run.com/
https://regrunmvp.replit.app/

Please be kind, but be honest. What am I missing? What would you build differently? Is this even the right problem to solve? I'm looking for design partners who are already deploying agents in production and want to protect themselves.

Thanks for reading,
Sara
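To make the model concrete, here is a minimal sketch of what a default-deny gate with time-boxed grants and a full audit trail could look like in Python. Every name here (AuthorizationGate, grant, authorize) is hypothetical and simplified; it is not the actual Reg.Run or APAA API, just an illustration of where the gate sits between the model's decision and the side effect.

```python
import time
import uuid

# Hypothetical sketch: a default-deny gate between an agent's decision
# and the side effect it wants to cause. Not the real Reg.Run / APAA API.
class AuthorizationGate:
    def __init__(self):
        self._grants = {}    # (agent_id, action) -> expiry timestamp
        self.audit_log = []  # append-only trace of every decision

    def grant(self, agent_id, action, ttl_seconds):
        """Time-boxed permission: allow `action` for `ttl_seconds`, not forever."""
        self._grants[(agent_id, action)] = time.time() + ttl_seconds

    def authorize(self, agent_id, action, context=None):
        """Default deny: block unless an unexpired grant exists, and log either way."""
        expiry = self._grants.get((agent_id, action))
        allowed = expiry is not None and time.time() < expiry
        self.audit_log.append({
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "context": context,
            "decision": "allow" if allowed else "deny",
        })
        return allowed

gate = AuthorizationGate()
gate.grant("billing-agent", "stripe.refund", ttl_seconds=300)  # five-minute window

if gate.authorize("billing-agent", "stripe.refund", context={"amount": 40}):
    pass  # perform the side effect
else:
    pass  # blocked before anything happens; the denial is in the audit log
```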
dheavy•1h ago
There's a real gap identified here (execution permission instead of output guardrails). The timing concern is valid (we're scaling agent frameworks way faster than security infrastructure; see Clawdbot-Moltbot). Default deny + time-boxed permissions + audit logs is a solid model, and easy to discuss at a high level with security teams in an org. The "Auth0 for AI Agents" framing is clear and positions it well.
Actually, the audit log piece is really huge. Having a complete execution trace with authorization decisions is invaluable for incident response. That alone might justify adoption even if the blocking mechanism is imperfect.
My concerns and questions:
- Where exactly does this sit? If it's between the agent and tool calls, that's relatively straightforward. If it needs to intercept arbitrary code execution or API calls, that's significantly harder.
- Adding another authorization layer means more setup, more policy configuration, more potential points of failure. Adoption challenge.
- Who defines what's "allowed"? In what format? How granular? Actually expressing "this agent can do X in context Y at time Z" in a way that's both powerful and usable is the whole ballgame (IMHO). I keep thinking of how complex AWS IAM policies got, and those are for relatively static systems. AI agents are dynamic, context-dependent, and probabilistic.
- By the time Reg.Run sees a request to execute, the LLM has already decided. What happens when you block it? Does the agent gracefully handle denials and retry with a different approach? (I've sketched what I mean by policy shape and denial handling right after this list.)
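To make those last two questions concrete, here's a rough sketch of what a policy and a denial-as-observation flow could look like. The schema, field names, and functions are invented for illustration (not Reg.Run's actual format); the point is that the denial goes back to the model as something it can replan around rather than a hard crash.

```python
# Hypothetical policy shape: "agent X may do action A in context Y for Z seconds".
POLICY = {
    "agent": "support-agent",
    "allow": [
        {
            "action": "zendesk.reply",            # what the agent may do
            "when": {"ticket.priority": "low"},   # in which context
            "valid_for_seconds": 600,             # for how long
        },
    ],
    # anything not matched above is denied by default
}

def is_allowed(policy, action, context):
    """Crude check: action must be listed and every `when` key must match
    (TTL enforcement omitted for brevity)."""
    for rule in policy["allow"]:
        if rule["action"] == action and all(
            context.get(k) == v for k, v in rule["when"].items()
        ):
            return True
    return False

def call_tool(action, context):
    """Wrapper the agent framework would route every tool call through."""
    if is_allowed(POLICY, action, context):
        return {"ok": True}  # placeholder for the real side effect
    # Return the denial as an observation so the model can replan
    # (ask a human, pick another tool) instead of failing hard.
    return {"ok": False, "error": "denied", "action": action}

print(call_tool("zendesk.reply", {"ticket.priority": "low"}))   # allowed
print(call_tool("stripe.refund", {"ticket.priority": "low"}))   # denied by default
```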
I'd be interested in seeing real-world policy examples from your design partners. That'll tell you whether you've found the right abstraction layer.
Congratulations on framing the idea and getting this far. I'm very concerned about the current free-wheeling AI expansion with minimal security, so I strongly believe this is going in the right direction, and I'd like to see where it leads.