Over the past year, we've all gotten used to AI writing code.
What we haven't solved is this:
Who decides whether that code should ship?
Guard is an open-core governance layer that sits between AI agents and execution.
It does not generate code. It scores risk, detects drift, enforces policy, and can block execution via exit codes.
Core concepts:
- Decision Snapshot (what the AI intends to do)
- Risk v1 scoring (structural risk)
- Policy enforcement (hard/soft blocks)
- Strict JSON contract
- Drift signal tracking
- Override receipts
Guard is CLI-first and integrates directly into existing dev workflows.
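To make the exit-code contract concrete, here's a minimal sketch of how a CI step could gate on it. The `guard` function below is a stand-in, not Guard's actual CLI (I haven't specified command names or flags here); the only assumption is the usual convention that exit 0 means allow and non-zero means block.

```shell
# Stand-in for the real CLI: pretend the policy engine
# blocked this change with exit code 2.
guard() { return 2; }

# Gate the pipeline on the exit code: proceed on 0, halt otherwise.
if guard; then
  echo "ALLOW"
else
  echo "BLOCK (exit $?)"
fi
```

Because the contract is just an exit code, any CI system or git hook can enforce it without Guard-specific integration.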
The goal is simple:
Before AI touches production, something evaluates it.
I'd appreciate feedback from anyone building AI-native systems. Especially: is governance something you feel is missing in current AI coding tools?
Repo: mindforge.run