AI agents are being deployed to schedule meetings, write code, manage infrastructure, and transact on behalf of users. But there's no standard way to
authorize what an agent can do, audit what it did, or revoke access when something goes wrong. Most teams are hacking together API keys and custom RBAC
— none of it is portable, auditable, or revocable at the agent level.
Grantex is an open delegated authorization protocol designed specifically for AI agents. Think of it as OAuth 2.0 for the agentic era: a principal
(human or organization) grants an agent scoped, time-limited permissions via signed JWT tokens. Every action is auditable. Permissions can be revoked
instantly — including cascade revocation across delegation chains when one agent delegates to another.
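To make the grant model concrete, here is a minimal, dependency-free Python sketch of what a scoped, time-limited, signed grant token can look like. The claim names (`scope`, `sub`, `exp`) and the HS256/HMAC signing are illustrative assumptions for this sketch, not the actual Grantex wire format; see SPEC.md for the real one.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT-style base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_grant(secret: bytes, principal: str, agent_id: str,
               scopes: list[str], ttl: int) -> str:
    """Mint an HS256-signed grant: scoped, time-limited, bound to one agent."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64url(json.dumps({
        "iss": principal,   # who granted the permission
        "sub": agent_id,    # which agent holds it
        "scope": scopes,    # what the agent may do
        "iat": now,
        "exp": now + ttl,   # expires on its own even if never revoked
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_grant(secret: bytes, token: str, required_scope: str) -> bool:
    """Check signature, expiry, and scope before allowing an action."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims["exp"] > time.time() and required_scope in claims["scope"]

secret = b"demo-secret"
token = mint_grant(secret, "alice@example.com", "agent-42",
                   ["calendar:read"], ttl=300)
assert verify_grant(secret, token, "calendar:read")       # in scope, not expired
assert not verify_grant(secret, token, "calendar:write")  # out of scope
```

The point of the sketch is the shape of the check: every agent action verifies signature, expiry, and scope, which is what makes grants auditable and safely short-lived.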
What makes it different:
- Standards-track: We submitted an IETF Internet-Draft (draft-mishra-oauth-agent-grants-01) to the OAuth Working Group and filed a public comment with
NIST NCCoE on AI agent authorization.
- Production-ready: 30+ packages across TypeScript, Python, and Go. Integrations for LangChain, OpenAI Agents SDK, Google ADK, CrewAI, Vercel AI, MCP,
and more. 679 tests. Deployed on Google Cloud Run.
- Enterprise features: SOC 2 Type I certified. Security audited by Vestige Security Labs (all findings remediated). Policy engine integration with OPA
and Cedar. Budget controls with atomic debit. Prometheus metrics + OpenTelemetry tracing.
- Self-hostable: Apache 2.0 licensed. Docker Compose for local dev, Helm chart for Kubernetes, Terraform provider for infrastructure-as-code.
Quick start:
npm install @grantex/sdk                      # TypeScript
pip install grantex                           # Python
go get github.com/mishrasanjeev/grantex-go    # Go
GitHub: https://github.com/mishrasanjeev/grantex
Docs: https://grantex.dev/docs
Playground: https://grantex.dev/playground
Protocol spec: https://github.com/mishrasanjeev/grantex/blob/main/SPEC.md
IETF draft: https://datatracker.ietf.org/doc/draft-mishra-oauth-agent-grants/
Happy to answer questions about the protocol design, the IETF process, or implementation details.