Observability after the fact is not control if the action is irreversible.
A scanner can tell you what happened. A boundary can decide whether it may happen.
The core problem I keep running into is this: once agents can call tools, trigger workflows, move data, spend money, or change state, “observe after the fact” stops being enough.
What seems missing is a practical pre-execution decision layer: an external allow/deny boundary between intent and execution.
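To make the idea concrete, here is a minimal sketch of such a boundary (all names and rules here are hypothetical, just to illustrate the shape): the agent proposes an action, and a separate policy layer decides allow/deny before anything executes.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str                # which tool the agent wants to call
    args: dict               # the arguments it wants to pass
    reversible: bool = True  # can this action be undone after the fact?

@dataclass
class Decision:
    allowed: bool
    reason: str

def pre_execution_gate(action: ProposedAction,
                       allowlist: set,
                       spend_limit: float = 0.0) -> Decision:
    """Decide whether an action may run, before it runs (illustrative rules)."""
    if action.tool not in allowlist:
        return Decision(False, f"tool {action.tool!r} not on allowlist")
    if not action.reversible:
        return Decision(False, "irreversible actions require human approval")
    if action.args.get("amount", 0.0) > spend_limit:
        return Decision(False, "exceeds spend limit")
    return Decision(True, "within policy")

# The execution layer only runs what the gate approves.
action = ProposedAction(tool="send_invoice", args={"amount": 250.0})
decision = pre_execution_gate(action,
                              allowlist={"read_file", "send_invoice"},
                              spend_limit=100.0)
print(decision.allowed, decision.reason)  # → False exceeds spend limit
```

The interesting part isn't the rules themselves but where they sit: outside the agent, between intent and execution, so a compromised or confused agent can't bypass them.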
Questions I’m interested in:
* How are people handling this today for agentic workflows in production?
* Are monitoring + approvals actually enough once execution becomes fast and autonomous?
* Where do existing policy engines break down for AI-driven actions?
* What would a real pre-execution control layer need to verify before allowing action?
I’m less interested in theory here and more in what people have actually seen fail in production.