Today's AI agent guardrails are almost entirely post-hoc: the model proposes an action, then a policy layer tries to catch and reject the dangerous ones. Prompt injection, jailbreaks, and edge cases mean a dangerous action is always one bypass away.
hibana-agent takes a different approach: dangerous actions (purchases, payments, infra changes) are not "blocked" — they are structurally unreachable. The execution path to the dangerous send literally does not exist in the compiled program unless the human-approval branch is taken. This is capability removal at the type level, not prompt-level persuasion.
Concrete demo: an AI agent browses freely, but add-to-cart requires a typed approval gate. The approval is a session-type branch; if the resolver (whatever answers the gate, here a human) selects "skip", there is no valid code path that reaches the add-to-cart send. Not at runtime, not through injection, not through any prompt trick. The code to execute it is simply not reachable.
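Hibana's real API looks different (the demo's actual code is in the repo), but the underlying Rust trick is plain typestate plus a branching enum. A minimal sketch, all names invented for illustration:

    use std::marker::PhantomData;

    struct Browsing;
    struct Approved;

    // A session handle whose type parameter records the protocol state.
    struct Session<S> {
        _state: PhantomData<S>,
    }

    struct Done;

    // The resolver's answer IS the branch: each variant carries the only
    // continuation that exists for that choice.
    enum Approval {
        Approve(Session<Approved>),
        Skip(Done),
    }

    impl Session<Browsing> {
        fn request_approval(self, human_said_yes: bool) -> Approval {
            if human_said_yes {
                Approval::Approve(Session { _state: PhantomData })
            } else {
                Approval::Skip(Done)
            }
        }
    }

    impl Session<Approved> {
        // Defined ONLY on Session<Approved>. Neither Session<Browsing>
        // nor Done has this method, so the "skip" arm has nothing to call.
        fn add_to_cart(self, item: &str) -> Done {
            println!("added {item} to cart");
            Done
        }
    }

    fn main() {
        let session: Session<Browsing> = Session { _state: PhantomData };
        match session.request_approval(false) {
            Approval::Approve(s) => {
                s.add_to_cart("item-123");
            }
            Approval::Skip(_done) => {
                // No add_to_cart exists in this arm; calling it is a
                // type error, not a policy violation.
            }
        }
    }

The point of the enum is that "approve" and "skip" don't share a continuation: Session<Approved> is only ever constructed inside the approve arm, so the capability to add to cart is created by the approval itself.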
A second demo shows enterprise expense approval with parallel Policy/Risk review, checkpoint/rollback, and explicit cancel semantics — all defined as one choreography.
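The interesting part of that demo is the join: payment only becomes possible after both reviews return, and cancellation is a first-class branch of the protocol rather than an exception. A toy version of that shape in plain Rust (names invented, checkpoint/rollback omitted):

    // Witness tokens: only the Policy role can mint PolicyApproved,
    // only the Risk role can mint RiskApproved.
    struct PolicyApproved;
    struct RiskApproved;

    enum ReviewOutcome {
        // Parallel reviews joined: both tokens or none.
        Approved(PolicyApproved, RiskApproved),
        // Explicit cancel semantics, carrying a reason.
        Cancelled(String),
    }

    // Paying consumes both tokens. There is no other way to call this,
    // so a lone Policy or Risk approval cannot pay anything.
    fn pay_expense(_policy: PolicyApproved, _risk: RiskApproved) {
        println!("expense paid");
    }

    fn main() {
        let outcome = ReviewOutcome::Approved(PolicyApproved, RiskApproved);
        match outcome {
            ReviewOutcome::Approved(p, r) => pay_expense(p, r),
            ReviewOutcome::Cancelled(reason) => println!("cancelled: {reason}"),
        }
    }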
Under the hood, this is built on Hibana (https://github.com/hibanaworks/hibana), an affine multiparty session type runtime for Rust. The protocol is defined once, projected per role at compile time, and each step is consumed exactly once. But you don't need to know any of that to see the value: your agent physically cannot do what the protocol doesn't allow.
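"Consumed exactly once" is where the affine part earns its keep, and Rust's move semantics enforce it at compile time. A toy illustration (again, not Hibana's API): each step takes the handle by value and returns the next state, so replaying an old step is a compile error, not a runtime check.

    struct AwaitingHello;
    struct AwaitingReply;

    impl AwaitingHello {
        // Takes `self` by value: the handle is moved into the call and
        // no longer exists afterwards.
        fn send_hello(self) -> AwaitingReply {
            AwaitingReply
        }
    }

    fn main() {
        let step = AwaitingHello;
        let _next = step.send_hello();
        // step.send_hello();
        // ^ error[E0382]: use of moved value: `step` -- the step was
        //   already consumed and cannot be replayed.
    }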
Would love feedback, especially from anyone building agent systems where a single wrong action is high-cost.