I built Faramesh because I wanted a hard, cryptographic boundary between the agent's "brain" and my actual infrastructure. It intercepts tool-calls and forces them through a deterministic gate before any code runs. If the action isn't in your policy, it simply doesn't exist.
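To make the gate idea concrete, here is a minimal sketch of the pattern (not Faramesh's actual API; the policy shape and function names are hypothetical): every tool call is checked against an explicit allowlist, and anything outside the policy never reaches real infrastructure.

    # Hedged sketch of a deterministic policy gate; names are illustrative only.
    from typing import Any, Callable

    # Hypothetical policy: tool name -> set of allowed argument keys
    POLICY: dict[str, set[str]] = {
        "read_file": {"path"},
        "http_get": {"url"},
    }

    def gate(tool: str, args: dict[str, Any], run: Callable[..., Any]) -> Any:
        """Run the tool only if the call is covered by the policy."""
        allowed = POLICY.get(tool)
        if allowed is None or not set(args) <= allowed:
            # The action isn't in the policy, so it never executes.
            raise PermissionError(f"tool call blocked by policy: {tool}({args})")
        return run(**args)

    # gate("read_file", {"path": "/tmp/x"}, read_file) runs;
    # gate("drop_table", {"name": "prod"}, drop_table) raises before any code runs.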
The biggest headache was canonicalization.py. LLMs are messy: one model sends a float as 1.0, another as 1.00, and the resulting hashes no longer match. I wrote a normalization engine so that identical intent always produces the exact same byte-stream and hash.
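The gist of that normalization, sketched below (this is not the code in canonicalization.py, just the idea under my own assumptions): normalize values, sort keys, serialize to a stable byte-stream, then hash.

    # Hedged sketch of intent canonicalization: normalize, sort, serialize, hash.
    import hashlib
    import json
    from typing import Any

    def _normalize(value: Any) -> Any:
        # Collapse numerically identical floats like 1.0 onto the integer 1
        # so different producers serialize the same intent to the same bytes.
        if isinstance(value, float) and value.is_integer():
            return int(value)
        if isinstance(value, dict):
            return {k: _normalize(v) for k, v in value.items()}
        if isinstance(value, list):
            return [_normalize(v) for v in value]
        return value

    def canonical_hash(tool: str, args: dict[str, Any]) -> str:
        payload = {"tool": tool, "args": _normalize(args)}
        # sort_keys + fixed separators give a deterministic byte-stream
        blob = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
        return hashlib.sha256(blob).hexdigest()

    # Same intent, same digest, regardless of how the model encoded the number:
    # canonical_hash("transfer", {"amount": 1}) == canonical_hash("transfer", {"amount": 1.0})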
It’s open source (Python/Node SDKs). I’m curious if people think this should live at the framework level or as a standalone proxy. Tear the code apart:
https://github.com/faramesh/faramesh-core
For the theory-minded, I recently published a paper titled "Faramesh: A Protocol-Agnostic Execution Control Plane for Autonomous Agent Systems" (link below).