But this is where things break down.
Most modern apps don’t have fine-grained permissions.
Concrete example: Vercel. If I want an agent to read logs or inspect env vars, I have to give it a token that also allows it to modify or delete things. There’s no clean read-only or capability-scoped access.
And this isn’t just Vercel. I see the same pattern across cloud dashboards, CI/CD systems, and SaaS APIs that were designed around trusted humans, not autonomous agents.
So the real question:
How are people actually restricting AI agents in production today?
Are you building proxy layers that enforce policy? Wrapping APIs with allowlists? Or just accepting the risk?
It feels like we’re trying to connect autonomous systems to infrastructure that was never designed for them.
Curious how others are handling this in real setups, not theory.
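To make the "proxy layer with an allowlist" option concrete, here's roughly what I have in mind: the agent never holds the real token; it talks to a proxy that checks each request against a (method, path-prefix) allowlist and only forwards matches upstream, attaching the full-scope token server-side. This is a minimal sketch; all the route names and helpers (`ALLOWLIST`, `is_allowed`, `handle`) are illustrative, not any real Vercel or SaaS API.

```python
# Minimal allowlisting-proxy policy sketch. The agent holds only a proxy
# credential; the proxy checks every request against an allowlist before
# forwarding it upstream with the real (full-scope) token.
# Route names are illustrative, not a real provider API.
from urllib.parse import urlparse

# (method, path prefix) pairs the agent may call: read-only routes only.
ALLOWLIST = [
    ("GET", "/v1/deployments"),  # inspect deployments
    ("GET", "/v1/logs"),         # read logs
    ("GET", "/v1/env"),          # list env var names (not values)
]

def is_allowed(method: str, url: str) -> bool:
    """Return True only if the request matches an allowlist entry."""
    path = urlparse(url).path
    return any(
        method.upper() == m and path.startswith(prefix)
        for m, prefix in ALLOWLIST
    )

def handle(method: str, url: str) -> int:
    """Proxy decision: 403 for anything off-list, otherwise forward."""
    if not is_allowed(method, url):
        return 403  # blocked: the agent can never mutate anything
    # Here the real proxy would forward upstream, e.g.:
    # requests.request(method, url, headers={"Authorization": real_token})
    return 200

print(handle("GET", "https://api.example.com/v1/logs?since=1h"))   # → 200
print(handle("DELETE", "https://api.example.com/v1/deployments"))  # → 403
```

The point of this shape is that scoping lives in infrastructure you control, so it works even when the upstream platform only issues all-or-nothing tokens.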
verdverm•1h ago
It's less about modern apps in general and more about specific apps and how your org puts its infra together.
I don't have your problem; I can give my agents all sorts of environments with a spectrum of access vs. restrictions.
NBenkovich•1h ago
The problem is higher-level platforms and SaaS. Once agents need feedback from deployment, CI, logs, or config tools, permissions often collapse into “full token or nothing”. Vercel is just one example.
That’s the gap I’m pointing at.
verdverm•13m ago
I don't have problems with permissions in any of the things you listed. We do mainly k8s-based infra.