Hey HN, I'm an AI engineer in Berlin building enterprise LLM systems. Built this because I kept hitting the same problem: classification tools assess risk once, governance dashboards track inventory, but nothing enforces compliance at runtime on every LLM call.
It's a Python middleware that wraps OpenAI/Azure/Anthropic/LangChain calls with audit logging, content policy, disclosure, and human escalation.
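To give a feel for the pattern, here's a rough sketch of how it wires in (names simplified for the post; the real API is in the README):

    from openai import OpenAI
    from agentguard import AgentGuard  # import path illustrative; see the repo

    client = OpenAI()

    # One guard object carries the runtime policy: every call through it
    # is audit-logged, checked against the content policy, disclosed,
    # and escalated to a human queue when flagged.
    guard = AgentGuard(
        audit_log="audit.jsonl",    # append-only log of calls + decisions
        disclosure=True,            # attach an AI-disclosure notice to outputs
        escalate_on="high_risk",    # route flagged calls to human review
    )

    guarded = guard.wrap(client)    # same surface as the raw client
    resp = guarded.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Draft a refund policy email."}],
    )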
GitHub: https://github.com/Sagar-Gogineni/agentguard
Honest limitations: the built-in keyword matcher misses nuanced threats (I show this in the article). That's why there's a custom classifier hook, so you can plug in Azure Content Safety, OpenAI Moderation, or an LLM-as-judge instead.
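For example, wiring OpenAI's moderation endpoint into that hook looks roughly like this (the hook signature here is simplified; the real one is documented in the repo):

    from openai import OpenAI
    from agentguard import AgentGuard  # import path illustrative; see the repo

    client = OpenAI()

    def moderation_classifier(text: str) -> bool:
        # Return True when the text should be blocked or escalated.
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=text,
        )
        return result.results[0].flagged

    # Swap the keyword matcher for the external classifier
    # (kwarg name illustrative).
    guard = AgentGuard(classifier=moderation_classifier)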
Happy to answer questions.