I built Cordum because I saw a huge gap between "AI demos" and "production safety." Everyone is building agents, but no one wants to give them write access to sensitive APIs (refunds, database deletions, server management).
The problem is that LLMs are probabilistic, but our infrastructure requires deterministic guarantees.
Cordum is an open-source "Safety Kernel" that sits between your LLM and your execution environment. Think of it as a firewall/proxy for agentic actions.
Instead of relying on the prompt to "please be safe," Cordum enforces policy at the protocol layer (a minimal sketch in Go follows this list):

1. It intercepts the agent's intent.

2. It checks the intent against a strict policy (e.g., "refunds over $50 require human approval").

3. It manages execution via a state machine.
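To make the flow concrete, here's a minimal sketch of the policy-check step. The Intent/Decision types and checkPolicy function are illustrative only, not Cordum's actual API; they just show the shape of the rule above ("refund > $50 requires human approval") as deterministic code.

    package main

    import (
        "errors"
        "fmt"
    )

    // Intent is a stand-in for an agent's requested action.
    type Intent struct {
        Action string             // e.g. "refund"
        Params map[string]float64 // e.g. {"amount": 120}
    }

    // Decision is the outcome of a policy check.
    type Decision int

    const (
        Allow Decision = iota
        RequireApproval
        Deny
    )

    // checkPolicy applies a deterministic rule before anything executes.
    // Rule from the post: refunds over $50 require human approval.
    func checkPolicy(in Intent) (Decision, error) {
        switch in.Action {
        case "refund":
            amount, ok := in.Params["amount"]
            if !ok {
                return Deny, errors.New("refund intent missing amount")
            }
            if amount > 50 {
                return RequireApproval, nil
            }
            return Allow, nil
        default:
            // Unknown actions fall through to a human gate rather than executing.
            return RequireApproval, nil
        }
    }

    func main() {
        d, _ := checkPolicy(Intent{Action: "refund", Params: map[string]float64{"amount": 120}})
        fmt.Println(d == RequireApproval) // true: $120 exceeds the $50 threshold
    }

The point is that the allow/deny decision is made by code the operator controls, not by the model's output.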
Tech stack:

- Written in Go (for performance and concurrency)

- NATS JetStream for the message bus (rough intent-flow sketch below)

- Redis for state management
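For a sense of how intents might move over JetStream before execution, here's a rough sketch using the nats.go client. The stream name, subjects, and durable consumer name are made up for illustration and are not Cordum's actual topology.

    package main

    import (
        "log"

        "github.com/nats-io/nats.go"
    )

    func main() {
        // Connect to NATS and get a JetStream context.
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Drain()

        js, err := nc.JetStream()
        if err != nil {
            log.Fatal(err)
        }

        // Hypothetical stream that durably captures every agent intent,
        // so actions can be audited and replayed.
        if _, err := js.AddStream(&nats.StreamConfig{
            Name:     "AGENT_INTENTS",
            Subjects: []string{"intents.>"},
        }); err != nil {
            log.Fatal(err)
        }

        // Durable consumer standing in for the safety kernel: it sees each
        // intent, runs the policy check, and only then forwards approved work.
        if _, err := js.Subscribe("intents.refund", func(m *nats.Msg) {
            // ... run the policy check, route to an approval queue or executor ...
            m.Ack()
        }, nats.Durable("safety-kernel")); err != nil {
            log.Fatal(err)
        }

        // Publish a sample intent (normally emitted by the agent-facing proxy).
        if _, err := js.Publish("intents.refund", []byte(`{"amount":120}`)); err != nil {
            log.Fatal(err)
        }

        select {} // keep the subscriber running
    }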
It’s still early days, but I’d love your feedback on the architecture and the approach to agent governance.
Repo: https://github.com/cordum-io/cordum
Happy to answer any questions!