I’ve been uneasy watching AI systems get wired directly into production without any real authority layer.
So I built a small gateway that sits between an application, an LLM, and real-world actions.
It enforces things most infrastructure systems take for granted:

- environment controls (dev vs prod)
- kill switch
- policy allowlists
- cost ceilings
- human approvals
- idempotency
- append-only audit logs
The LLM can suggest actions, but nothing executes unless it passes policy.
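To make that concrete, here is a minimal sketch of what the gate looks like conceptually. The names (`ActionRequest`, `PolicyDecision`, `check_action`, the policy keys) are illustrative, not the gateway's actual API:

```python
from dataclasses import dataclass

# Illustrative types; the real gateway's schema differs.
@dataclass
class ActionRequest:
    action: str            # e.g. "send_email", "provision_vm"
    environment: str       # "dev" or "prod"
    estimated_cost: float
    requires_approval: bool
    idempotency_key: str

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str

def check_action(req: ActionRequest, policy: dict) -> PolicyDecision:
    """Every control is a hard gate: an LLM suggestion only reaches
    an executor if all of the checks pass."""
    if policy["kill_switch"]:
        return PolicyDecision(False, "kill switch engaged")
    if req.environment not in policy["allowed_environments"]:
        return PolicyDecision(False, f"environment '{req.environment}' not permitted")
    if req.action not in policy["allowlist"]:
        return PolicyDecision(False, f"action '{req.action}' not on allowlist")
    if req.estimated_cost > policy["cost_ceiling"]:
        return PolicyDecision(False, "cost ceiling exceeded")
    if req.requires_approval and req.idempotency_key not in policy["approved_keys"]:
        return PolicyDecision(False, "waiting on human approval")
    return PolicyDecision(True, "all checks passed")
```

The decision (and its reason) is what goes into the append-only audit log, whether or not the action runs.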
This is an early prototype. Executors are stubbed, and approvals and idempotency are held in memory. The goal isn’t completeness; it’s finding the right control abstraction.
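The in-memory part is roughly this simple, and it’s exactly what would need a durable backend before the prototype is anything more. A hypothetical sketch (names are mine, not the repo's):

```python
import time

class InMemoryIdempotencyStore:
    """Prototype-only: records which idempotency keys have already run,
    so a retried LLM suggestion doesn't execute twice.
    Lives in process memory, so a restart loses everything."""

    def __init__(self) -> None:
        self._seen: dict[str, float] = {}

    def claim(self, key: str) -> bool:
        # True the first time a key is seen, False on replays.
        if key in self._seen:
            return False
        self._seen[key] = time.time()
        return True
```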
I’m sharing this because I think AI needs a control plane, not just better prompts.
Curious how others here are thinking about approvals, idempotency, and execution safety in AI systems.