I built a formal specification and reference implementation for
governing autonomous agent actions before they reach execution.
The core idea: every agent action must pass a simultaneous check —
identity → capability scope → delegation chain → risk policy →
execution token. If any condition fails, the action is denied.
No exceptions, no fallbacks.
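The all-or-nothing gate described above can be sketched in a few lines. This is an illustrative sketch only: `Action` and the five predicates are made-up names, not the ACP spec's actual API. It shows the shape of the rule: every condition must hold, any single failure denies, and there is deliberately no fallback path.

```python
from dataclasses import dataclass

@dataclass
class Action:
    agent_id: str       # who is acting
    capability: str     # what it wants to do
    scope: frozenset    # capabilities its grant covers
    chain_valid: bool   # delegation chain verified
    within_risk: bool   # risk policy satisfied
    token_valid: bool   # execution token checked

# One predicate per condition in the chain:
# identity -> capability scope -> delegation chain -> risk policy -> token
CHECKS = [
    lambda a: bool(a.agent_id),
    lambda a: a.capability in a.scope,
    lambda a: a.chain_valid,
    lambda a: a.within_risk,
    lambda a: a.token_valid,
]

def authorize(action: Action) -> str:
    # Simultaneous check: the first failing condition denies outright.
    return "ALLOW" if all(check(action) for check in CHECKS) else "DENY"

ok = Action("agent-1", "read", frozenset({"read"}), True, True, True)
bad = Action("agent-1", "write", frozenset({"read"}), True, True, True)
print(authorize(ok))   # ALLOW
print(authorize(bad))  # DENY: capability outside granted scope
```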
What's in the repo:
- Spec v1.11: 36 documents across 4 conformance levels (L1-L4)
- Go reference implementation: 22 packages
- 42 signed conformance test vectors (real Ed25519 + SHA-256)
- Python SDK: @acp_tool for LangChain, RunContext for Pydantic AI,
ACPToolDispatcher for MCP
- Docker image
The non-escalation requirement in delegation chains is the part I
found most interesting to formalize — a delegatee cannot grant
capabilities they don't hold, verified cryptographically at each hop.
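Stripped of the cryptography, the non-escalation invariant is a monotonicity check: the granted set can only shrink or stay equal along the chain. A minimal sketch under assumed names (`verify_chain` is not from the spec):

```python
def verify_chain(root_caps, hops):
    """hops: capability sets granted at each delegation step, root-first."""
    held = set(root_caps)
    for caps in hops:
        if not set(caps) <= held:  # delegatee tried to grant more than held
            return False
        held = set(caps)           # the delegatee now holds at most this
    return True

# Honest chain: each hop delegates a subset of what it holds.
print(verify_chain({"read", "write"}, [{"read"}, {"read"}]))   # True
# Escalation attempt: second hop grants "write" it was never given.
print(verify_chain({"read"}, [{"read"}, {"read", "write"}]))   # False
```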
I built ACP to solve a specific problem: autonomous agents operating
on real systems have no standardized way to verify authorization before
acting. OAuth handles user identity. RBAC handles permissions. But neither
was designed for agent-to-agent delegation chains where each hop needs
cryptographic verification.
The core: every action must satisfy simultaneously —
identity → capability scope → delegation chain → risk policy → execution token
The part I found most interesting to formalize: non-escalation in
delegation chains. A sub-agent cannot grant capabilities it doesn't hold,
verified at each hop with Ed25519 signatures. This makes privilege
escalation attacks detectable at the protocol level.
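The per-hop verification can be sketched with the standard library. The real protocol uses Ed25519 signatures as described above; in this sketch an HMAC-SHA256 tag stands in for the signature so the example stays self-contained, and all names are illustrative rather than taken from the spec. The point is that each hop's tag binds both the previous hop and the granted capabilities, so an altered grant fails verification.

```python
import hashlib
import hmac
import json

def sign_hop(key: bytes, prev_tag: str, caps: set) -> str:
    # Tag covers the previous hop's tag and the (canonicalized) capability set.
    payload = json.dumps({"prev": prev_tag, "caps": sorted(caps)}).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_hop(key: bytes, prev_tag: str, caps: set, tag: str) -> bool:
    return hmac.compare_digest(sign_hop(key, prev_tag, caps), tag)

key = b"demo-key"  # stand-in for the hop's Ed25519 keypair
t1 = sign_hop(key, "", {"read", "write"})
t2 = sign_hop(key, t1, {"read"})  # honest hop: subset of {read, write}

print(verify_hop(key, t1, {"read"}, t2))           # True
print(verify_hop(key, t1, {"read", "admin"}, t2))  # False: grant was altered
```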
What's published:
- Spec v1.11: 36 documents, 4 conformance levels
- Go reference impl: 22 packages, 42 signed test vectors
- Python SDK for LangChain, Pydantic AI, MCP
- Paper: https://doi.org/10.5281/zenodo.19072332
Happy to discuss the threat model or the delegation chain formalization.
Spec + code: https://github.com/chelof100/acp-framework-en