As models become more agentic, their outputs often shift quietly from descriptive to prescriptive, with no explicit signal that the system is now effectively taking action. Keyword filters and rule-based guardrails break down quickly in these cases.
Verdic is an intent governance layer that sits between the model and the application. Instead of checking topics or keywords, it evaluates:
whether an output collapses future choices into a specific course of action
whether the response exerts normative pressure (directing behavior rather than explaining it)
The goal isn’t moderation, but behavioral control: detecting when an AI system is operating outside the intent it was deployed for, especially in regulated or decision-critical workflows.
Verdic currently runs as an API with configurable allow / warn / block outcomes. We’re testing it on agentic workflows and long-running chains where intent drift is hardest to detect.
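To make the allow / warn / block flow concrete, here is a minimal sketch of how an application might call a governance layer like this, assuming a plain HTTP endpoint. The URL, field names, and response shape are illustrative placeholders I'm using for the example, not Verdic's actual interface.

```python
import requests  # assuming a plain HTTP integration; a real SDK may differ

VERDIC_URL = "https://api.example.com/v1/evaluate"  # hypothetical endpoint

def check_output(model_output: str, deployed_intent: str) -> str:
    """Send a model output to the governance layer and return allow / warn / block.

    The request and response fields below are illustrative assumptions.
    """
    resp = requests.post(
        VERDIC_URL,
        json={
            "output": model_output,
            "intent": deployed_intent,  # the intent the system was deployed for
        },
        timeout=5,
    )
    resp.raise_for_status()
    result = resp.json()
    # Hypothetical response: a decision plus the two signals described above, e.g.
    # {"decision": "warn",
    #  "signals": {"choice_collapse": 0.82, "normative_pressure": 0.41}}
    return result["decision"]

decision = check_output(
    "You should liquidate the position before Friday.",
    "explain portfolio risk; do not give trade instructions",
)
if decision == "block":
    # fall back to a descriptive rephrasing, or escalate to a human reviewer
    pass
```

The point of the sketch is the shape of the integration: the application passes the deployed intent alongside the output, and routes on the returned decision rather than on topic or keyword matches.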
This is an early release. I’m mainly looking for feedback from people deploying LLMs in production, especially around:
agentic systems
AI governance
risk & compliance
failure modes we might be missing
Happy to answer questions or share more details about the approach.