We built this because we realized that "prompt engineering" isn't enough for critical AI systems (in finance or governance, for example). You can't just ask an LLM nicely not to delete a database; you need a runtime guarantee.
CSL-Core is a policy language designed to bring "Policy-as-Code" to AI agents.
Instead of relying on the model's probabilistic behavior, CSL enforces constraints that are:
1. Formally Verified: Policies are compiled into Z3 constraints to mathematically prove they have no logical conflicts or loopholes.
2. Deterministic: The checks happen in a separate runtime engine, independent of the LLM's context window.
3. Model Agnostic: The engine acts as a firewall between the LLM and your tools/APIs (a rough sketch of both the offline proof and the runtime check follows below).
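For anyone who wants something concrete, here is a minimal sketch of both halves of that pipeline in plain Python using the z3-solver package. To be clear, the rule content, the variable names, and the enforce() helper are hypothetical illustrations of the approach, not CSL's actual compiler output or API:

    # Sketch only: toy rules and a hypothetical enforce() gate, not CSL's real API.
    from z3 import And, Bools, Exists, ForAll, Implies, Not, Solver, sat, unsat

    is_prod, is_delete, has_approval, allowed = Bools(
        "is_prod is_delete has_approval allowed"
    )

    # Two toy rules "compiled" into Z3 constraints:
    #   R1: deleting anything in production without approval must be denied.
    #   R2: any approved request must be allowed.
    rules = [
        Implies(And(is_prod, is_delete, Not(has_approval)), Not(allowed)),
        Implies(has_approval, allowed),
    ]

    # Offline verification: prove the rule set is conflict-free, i.e. every
    # possible request admits at least one consistent decision.
    checker = Solver()
    checker.add(ForAll([is_prod, is_delete, has_approval],
                       Exists([allowed], And(rules))))
    print("conflict-free:", checker.check() == sat)   # True for these rules

    # Runtime enforcement: a deterministic gate between the LLM and the tool
    # layer. The model proposes a call; the engine binds the concrete facts
    # and asks Z3 whether the policy can permit it. No prompt is involved.
    def enforce(tool_call: dict) -> bool:
        s = Solver()
        s.add(rules)
        s.add(is_prod == (tool_call.get("env") == "prod"))
        s.add(is_delete == (tool_call.get("action") == "delete"))
        s.add(has_approval == tool_call.get("approved", False))
        s.add(allowed)   # is there any world in which this call is allowed?
        return s.check() != unsat

    print(enforce({"env": "prod", "action": "delete"}))                    # False
    print(enforce({"env": "prod", "action": "delete", "approved": True}))  # True

The point of the design is that the offline conflict proof and the per-call decision run against the same constraint set, entirely outside the model's context window.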
CSL-Core is currently in alpha (v0.2). We're working on TLA+ specifications for the dual formal verification engine and the governance architecture, because we believe AI safety needs mathematical rigor.
I'd appreciate any feedback on the DSL syntax and our verification approach.
aytuakarlar•1h ago
The CLI also lets you verify policies without writing any Python code. Thanks!