We built EnforceAuth to address this. It lets you write policy once and enforce it everywhere: in microservices, data stores, SaaS platforms, and AI agents. Under the hood we use Open Policy Agent (OPA), but we add a distributed control plane, runtime evaluation, and AI-aware guardrails:
• Single policy engine: define Rego or YAML policies once and deploy them across your estate; migrate from Styra DAS/Enterprise OPA with no rewrites or downtime.
• Real-time decisions: our fabric evaluates each access request at the point of use, preventing stale permissions and configuration drift.
• AI guardrails: treat AI agents as identities with attributes like trust_level and enforce guardrails based on risk; see the Rego example in the first comment below.
• Audit-ready logging: every decision is signed and logged, turning compliance from a manual audit into an API.
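To give a feel for what "signed and logged" could mean, here is a minimal sketch in Python. The helper names, key handling, and HMAC scheme are my illustrative assumptions, not our actual wire format:

```python
import hashlib
import hmac
import json

# Illustrative signing key; in production this would come from a KMS/HSM.
SIGNING_KEY = b"demo-key"

def sign_decision(decision: dict) -> dict:
    """Attach an HMAC-SHA256 signature to a decision record (sketch only)."""
    payload = json.dumps(decision, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"decision": decision, "signature": signature}

def verify_decision(record: dict) -> bool:
    """Recompute the HMAC and compare in constant time."""
    payload = json.dumps(record["decision"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The point is that any tampering with a logged decision invalidates its signature, so an auditor can verify the trail mechanically instead of by manual review.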
EnforceAuth runs on‑premises or in the cloud; you can deploy it as a control‑plane‑only service or with sidecar/SDKs depending on latency requirements. We’re releasing a free tier with 10k decisions/month and transparent paid plans for enterprises.
We’re excited to open our GA wait‑list to the HN community. If unified authorization and AI guardrails would make your life easier, join the wait‑list and let us know what you think. I’ll be here all day to answer questions and would love your feedback.
EnforceAuthMark•1h ago
Core components: EnforceAuth uses a distributed control plane that stores policies in Git and compiles them to WebAssembly. Sidecars or SDKs fetch compiled policies via gRPC and cache them locally. Decisions are evaluated in milliseconds and include context (identity, resource, action, environment). If the control plane is unreachable, sidecars keep enforcing the last known policy.
Migration: Existing OPA or Styra DAS policies can be imported directly. Our migration layer mirrors requests to EnforceAuth while your current system stays in place; when you're comfortable, flip traffic over and remove the old system. No rewrites required.
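The mirroring step can be pictured like this. A toy sketch only: the real migration layer works at the request level, and these function names are invented for illustration:

```python
def shadow_authorize(request, legacy_decide, new_decide, log_divergence):
    """Serve traffic from the legacy engine while mirroring each request
    to the new engine and logging any disagreement (sketch)."""
    legacy = legacy_decide(request)
    try:
        mirrored = new_decide(request)
        if mirrored != legacy:
            log_divergence(request, legacy, mirrored)
    except Exception:
        pass  # the shadow path must never affect live traffic
    return legacy  # the legacy engine stays authoritative until cutover
```

You run in this mode until the divergence log goes quiet, then cut over with confidence instead of hope.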
AI guardrails example
We model AI agents as identities with roles and attributes. Here's a simple Rego example that always allows admins, applies role-based permissions to users, and gates AI agents on a trust level above 2:
```rego
default allow = false

# Admins are always allowed
allow {
    input.user.role == "admin"
}

# Role-based permissions
allow {
    perm := data.permissions[input.user.role][_]
    perm.action == input.action
    perm.resource == input.resource
}

# AI agent guardrail: require a trust level above 2
allow {
    input.agent != null
    input.agent.trust_level > 2
    perm := data.permissions[input.agent.role][_]
    perm.action == input.action
    perm.resource == input.resource
}
```
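To make the evaluation concrete, here is the same decision logic replayed in plain Python against a sample permissions table. The data shape is an assumption for illustration; in practice data.permissions lives in OPA:

```python
# Sample permissions table, keyed by role (illustrative shape).
PERMISSIONS = {
    "analyst": [{"action": "read", "resource": "reports"}],
}

def role_allows(role, action, resource):
    """True if any permission entry for the role matches action and resource."""
    return any(
        p["action"] == action and p["resource"] == resource
        for p in PERMISSIONS.get(role, [])
    )

def allow(inp):
    """Mirror of the Rego policy above: admin, role-based, or trusted agent."""
    user = inp.get("user") or {}
    agent = inp.get("agent")
    if user.get("role") == "admin":
        return True
    if role_allows(user.get("role"), inp["action"], inp["resource"]):
        return True
    if agent is not None and agent.get("trust_level", 0) > 2:
        return role_allows(agent.get("role"), inp["action"], inp["resource"])
    return False
```

Note that a trusted agent still only gets the permissions of its assigned role; trust_level raises the bar, it never widens access.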
Observability & integrations
All decisions are exported to Prometheus/OpenTelemetry. You can send logs to your SIEM or data lake for analytics. Our SDKs are available for Go, Python and Java; Rust and Node are on the roadmap.
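As a rough illustration of what the exported decision metrics might look like, here is a stdlib-only sketch that renders counts in the Prometheus text exposition format. The metric name is invented; the real integration uses the Prometheus/OpenTelemetry SDKs:

```python
from collections import Counter

decisions = Counter()

def record_decision(allowed: bool):
    """Tally each authorization outcome."""
    decisions["allow" if allowed else "deny"] += 1

def render_prometheus() -> str:
    """Render tallies in Prometheus text exposition format (sketch)."""
    lines = ["# TYPE enforceauth_decisions_total counter"]
    for outcome in sorted(decisions):
        lines.append(
            f'enforceauth_decisions_total{{decision="{outcome}"}} {decisions[outcome]}'
        )
    return "\n".join(lines)
```

A sudden jump in the deny rate on one service is usually the first sign of a bad policy push, which is why we surface it as a first-class metric.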
Questions for the community
How are you approaching authorization for AI agents? Are you using OPA or home‑grown logic?
Would a gradual migration path help you adopt unified authorization?
What languages/frameworks should we prioritise for SDK support?
Thanks for reading; I’m keen to hear your experiences.