verify_policy: Z3 formal verification in one call (see the sketch after this list)
simulate_policy: test any JSON input, get ALLOWED/BLOCKED
explain_policy: human-readable breakdown of any policy
scaffold_policy: describe what you want in English, get a CSL template
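
Under the hood, verify_policy reduces to a satisfiability check. Here's a minimal sketch of the idea in raw Z3 (pip install z3-solver), with two made-up rules; it illustrates the approach, not CSL-Core's actual encoding:

from z3 import Solver, Int, Bool, And, Not, sat

# Hypothetical rules over one request (names illustrative, not CSL syntax):
#   R1: BLOCK if amount > 5000 and not is_admin
#   R2: ALLOW if amount <= 8000
amount = Int("amount")
is_admin = Bool("is_admin")

blocks = And(amount > 5000, Not(is_admin))  # R1 fires
allows = amount <= 8000                     # R2 fires

s = Solver()
s.add(And(blocks, allows))  # is there any input both rules claim?
if s.check() == sat:
    print("Conflict:", s.model())  # e.g. amount = 6000, is_admin = False
else:
    print("No contradiction: the rules never disagree")

If the solver finds a model, some input gets both ALLOW and BLOCK and the policy is contradictory; unsat is a proof that this can never happen.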
That means you can do the following from Claude Desktop or Cursor:
"Write me a policy that blocks transfers over $5000 for non-admin users"
→ scaffold generates a CSL template
→ verify proves it has no contradictions
→ simulate tests your edge cases
→ all without leaving your editor
The full loop, from English description to mathematically verified, runtime-enforced policy, happens inside your AI assistant.
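
For example, feeding simulate_policy a hypothetical request against the $5000 rule (field names illustrative):

{"action": "transfer", "amount": 7500, "role": "analyst"}
→ BLOCKED (transfer over $5000 by a non-admin)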
Why not just prompt the LLM to enforce rules? We benchmarked GPT-4o, Claude Sonnet 4, and Gemini 2.0 Flash as guardrails with a hardened system prompt. Every model was bypassed by at least one attack (context spoofing, multi-turn role escalation, Unicode homoglyphs). CSL-Core blocked all of them, because the LLM never touches the enforcement layer.
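
To make the homoglyph attack concrete: Cyrillic 'а' (U+0430) renders identically to Latin 'a' (U+0061), so anything matching on the surface string slips past, while a structured enforcement layer never sees the prose at all. A two-line Python illustration of the primitive:

spoofed = "tr\u0430nsfer"      # Cyrillic а in place of Latin a
print(spoofed == "transfer")   # False: looks identical, compares unequal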
Setup:
pip install "csl-core[mcp]"
Claude Desktop config:
{
  "mcpServers": {
    "csl-core": {
      "command": "csl-core-mcp"
    }
  }
}
Or with Docker (clone the repo first so you have the Dockerfile):
git clone https://github.com/Chimera-Protocol/csl-core && cd csl-core
docker build -t csl-core-mcp .
docker run -i csl-core-mcp
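
If you go the Docker route, Claude Desktop can launch the container directly. A plausible config, assuming the image name above (untested sketch):

{
  "mcpServers": {
    "csl-core": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "csl-core-mcp"]
    }
  }
}

The -i flag matters: MCP servers speak JSON-RPC over stdio, so stdin has to stay open.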
GitHub: https://github.com/Chimera-Protocol/csl-core
Listed on awesome-mcp-servers:
https://github.com/punkpeye/awesome-mcp-servers
Previous Show HN discussion: https://news.ycombinator.com/item?id=46963250
Would love feedback, especially from anyone building agentic systems where safety guarantees matter.
aytuakarlar•1h ago
CSL-Core also lets you verify policies from the CLI without writing any Python code. I'd really appreciate any feedback on the DSL syntax or the verification approach. Thanks!