- A canonical `AgentAction` data model
- A cryptographic chain-integrity model (SHA-256-linked, like a git commit graph)
- Explicit privacy boundaries: raw reasoning and parameters must never be stored, only their hashes
- An extensibility mechanism for LangChain, OpenAI Agents, Anthropic, etc.
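To make the chain and privacy ideas concrete, here's a minimal sketch of a hash-linked action record. The field names (`decision`, `reasoning_hash`, `prev_hash`) and helper functions are illustrative assumptions, not the actual AAP schema: each record stores only SHA-256 digests of the raw reasoning and parameters, and links to the previous record's hash, git-style.

```python
import hashlib
import json
from dataclasses import dataclass

# Illustrative sketch only -- field names and helpers are NOT the AAP schema.

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass(frozen=True)
class AgentAction:
    decision: str        # e.g. "invoke_tool", "delegate", "abort"
    reasoning_hash: str  # SHA-256 of the raw reasoning text (raw text discarded)
    params_hash: str     # SHA-256 of the serialized parameters (raw params discarded)
    prev_hash: str       # hash of the previous record; chains like a git commit graph

    def record_hash(self) -> str:
        # Canonical serialization so the hash is deterministic.
        payload = json.dumps(
            {
                "decision": self.decision,
                "reasoning_hash": self.reasoning_hash,
                "params_hash": self.params_hash,
                "prev_hash": self.prev_hash,
            },
            sort_keys=True,
        ).encode()
        return sha256_hex(payload)

def append_action(chain: list, decision: str, reasoning: str, params: dict) -> AgentAction:
    # Genesis records link to an all-zero hash.
    prev = chain[-1].record_hash() if chain else "0" * 64
    action = AgentAction(
        decision=decision,
        reasoning_hash=sha256_hex(reasoning.encode()),
        params_hash=sha256_hex(json.dumps(params, sort_keys=True).encode()),
        prev_hash=prev,
    )
    chain.append(action)
    return action

def verify_chain(chain: list) -> bool:
    # Recompute each link; any tampering upstream breaks every downstream link.
    expected = "0" * 64
    for action in chain:
        if action.prev_hash != expected:
            return False
        expected = action.record_hash()
    return True
```

Note that `abort` flows through the exact same path as `invoke_tool` here: a refusal produces a record, gets hashed, and extends the chain like any other decision.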
AAP is explicitly NOT a policy engine, a certification body, or a SaaS. It's a primitive: the smallest possible unit of verifiable agent cognition.

One design decision we want feedback on: we made `abort` a first-class decision type, equal to `invoke_tool` or `delegate`. An agent refusing to act is as auditable as an agent acting. Does this hold up in your mental model?

Reference implementation (Python) + RFC: https://github.com/Thinklanceai/aap-python

Tear it apart.