I’ve been working in DevOps and infrastructure for years (currently in the fintech/security space), and as I started playing with AI agents, I noticed a scary pattern. Most "safety" mechanisms rely on system prompts ("Please don't do X") or flimsy Python logic inside the agent itself.
If we treat agents as autonomous employees, giving them root access and hoping they listen to instructions felt insane to me. I wanted a way to enforce hard constraints that the LLM cannot override, no matter how "jailbroken" it gets.
So I built Cordum. It’s an open-source "Safety Kernel" that sits between the LLM's intent and the actual execution.
The architecture is designed to be language-agnostic:

1. *Control Plane (Go/NATS/Redis):* Manages state and policy.
2. *The Protocol (CAP v2):* A wire format that defines jobs, steps, and results.
3. *Workers:* You can write your agent in Python (using Pydantic), Node, or Go, and they all connect to the same safety mesh (a rough worker-side sketch follows below).
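To give a feel for the worker side, here is a minimal sketch of what a job payload might look like as a Pydantic model. The field names and schema below are my own illustration, not the actual CAP v2 wire format:

    # Hypothetical sketch of a CAP-style job message (illustrative field names,
    # not the real CAP v2 schema).
    from enum import Enum
    from typing import Any
    from pydantic import BaseModel, Field

    class StepStatus(str, Enum):
        PENDING = "pending"
        APPROVED = "approved"
        BLOCKED = "blocked"

    class Step(BaseModel):
        """A single action the agent intends to take."""
        tool: str                                   # e.g. "bank.transfer"
        args: dict[str, Any] = Field(default_factory=dict)
        status: StepStatus = StepStatus.PENDING

    class Job(BaseModel):
        """A unit of work submitted to the control plane."""
        job_id: str
        agent_id: str
        steps: list[Step]
        metadata: dict[str, Any] = Field(default_factory=dict)

    # Workers in any language deserialize the same wire format;
    # here a Python worker validates an incoming payload.
    payload = {
        "job_id": "job-123",
        "agent_id": "billing-agent",
        "steps": [{"tool": "bank.transfer", "args": {"amount": 75, "currency": "USD"}}],
    }
    job = Job.model_validate(payload)
    print(job.steps[0].tool, job.steps[0].status.value)  # bank.transfer pending

The point is that the schema, not the agent's prompt, is the contract: any worker that speaks the wire format gets the same enforcement.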
Key features I focused on:

- *The "Kill Switch":* Revoke an agent's permissions instantly via the message bus, without killing the host server.
- *Audit Logs:* Every intent and action is recorded (critical for when things go wrong).
- *Policy Enforcement:* Block actions based on metadata (e.g., "Review required for any transfer > $50") before they ever reach the worker (toy example below).
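To make the policy idea concrete, here is a toy check in Python. The rule shape, tool name, and argument names are made up for illustration; Cordum's actual policy engine lives in the Go control plane and isn't shown here:

    # Toy illustration of metadata-based policy enforcement performed
    # before a step is dispatched to any worker (names are hypothetical).
    from typing import Any

    REVIEW_THRESHOLD_USD = 50  # "Review required for any transfer > $50"

    def check_policy(tool: str, args: dict[str, Any]) -> str:
        """Return 'allow' or 'review' for a proposed action."""
        if tool == "bank.transfer" and args.get("amount", 0) > REVIEW_THRESHOLD_USD:
            return "review"  # hold for human approval before dispatch
        return "allow"

    print(check_policy("bank.transfer", {"amount": 75}))  # review
    print(check_policy("bank.transfer", {"amount": 20}))  # allow

Because this check runs in the control plane rather than inside the agent, a jailbroken prompt can change the agent's intent but not the verdict.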
It’s still early days (v0.x), but I’d love to hear your thoughts on the architecture. Is a separate control plane overkill, or is this where agentic infrastructure is heading?
Repo: https://github.com/cordum-io/cordum
Thanks!