I’ve been working on a project called OpenVerb, which explores an architectural idea for AI systems: separating reasoning from execution.
Most AI agent frameworks today focus on improving reasoning loops, planning, and orchestration (LangChain, LangGraph, etc.). But once an agent decides to perform an action, execution usually becomes a direct tool call, script, or API invocation.
That approach works, but it also creates some issues:

• custom glue code for every integration
• inconsistent action schemas
• limited determinism in execution
• difficult auditing and policy enforcement
OpenVerb experiments with treating actions as a protocol layer, not just function calls.
Instead of arbitrary tool calls, systems define structured verbs that describe:

• the action being performed
• required inputs
• expected outputs
• execution policies
• audit information
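To make that concrete, here's a minimal sketch of what a verb definition could look like in Python. The field names and schema here are illustrative only, not a finalized OpenVerb format:

```python
import json
from dataclasses import dataclass, field, asdict

# Illustrative sketch of a structured verb -- field names are examples,
# not a finalized OpenVerb schema.
@dataclass
class Verb:
    name: str                                   # the action being performed
    inputs: dict                                # required inputs (name -> type)
    outputs: dict                               # expected outputs (name -> type)
    policy: dict = field(default_factory=dict)  # execution policies
    audit: dict = field(default_factory=dict)   # audit information

send_email = Verb(
    name="notify.send_email",
    inputs={"to": "string", "subject": "string", "body": "string"},
    outputs={"message_id": "string"},
    policy={"requires_approval": True, "max_retries": 2},
    audit={"log_level": "full", "retention_days": 90},
)

# Because a verb is plain data, it serializes to a stable wire format
# that any runtime can validate, log, and execute.
print(json.dumps(asdict(send_email), indent=2))
```

The point is that the action is data first: it can be validated, diffed, and audited before anything runs.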
Conceptually the architecture looks like this:
AI Model / Agent Framework
  ↓
Reasoning Layer
  ↓
OpenVerb (Action Protocol)
  ↓
System Execution
The idea is that agent frameworks control how the AI thinks, while OpenVerb standardizes how actions are executed.
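A toy example of what that execution layer could look like: the agent emits a verb invocation as plain data, and the executor validates it against a registered schema before any handler runs. All names here are illustrative, not an actual OpenVerb API:

```python
# Toy execution layer: the agent produces structured data, never direct
# function calls. The executor checks the invocation against a registry
# of known verbs before executing anything.

VERBS = {
    "fs.read_file": {
        "inputs": {"path"},  # required input names
        "handler": lambda args: f"<contents of {args['path']}>",
    },
}

def execute(invocation: dict) -> dict:
    verb = VERBS.get(invocation["verb"])
    if verb is None:
        return {"ok": False, "error": "unknown verb"}
    missing = verb["inputs"] - invocation["args"].keys()
    if missing:
        return {"ok": False, "error": f"missing inputs: {sorted(missing)}"}
    # Only a validated invocation ever reaches the handler, which is
    # where auditing and policy checks would also hook in.
    return {"ok": True, "result": verb["handler"](invocation["args"])}

print(execute({"verb": "fs.read_file", "args": {"path": "/tmp/notes.txt"}}))
```

Because every action passes through one choke point, policy enforcement and audit logging become properties of the layer rather than of each integration.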
Some existing projects touch related areas:

• Model Context Protocol (MCP) – tool and data discovery for AI systems
• LangGraph – deterministic reasoning loops for agents
• PydanticAI – structured schemas for agent outputs
OpenVerb explores something slightly different: a universal grammar for deterministic execution that could work across domains (software systems, spatial systems, robotics, etc.).
Still early and experimental, but I’d love feedback from people thinking about agent architecture or execution reliability.
Curious if others have explored similar ideas or if there are related systems I should look at.