I’ve been working on an open-source project to explore a problem I keep running into with LLM systems in production:
We give models the ability to call tools, access data, and make decisions… but we don’t have a real runtime security layer around them.
So I built a system that acts as a control plane for AI behavior, not just infrastructure.
GitHub: https://github.com/dshapi/AI-SPM
What it does
The system sits around an LLM pipeline and enforces decisions in real time:
Detects and blocks prompt injection (including obfuscation attempts) Forces structured tool calls (no direct execution from the model) Validates tool usage against policies Prevents data leakage (PII / sensitive outputs) Streams all activity for detection + audit Architecture (high-level) Gateway layer for request control Context inspection (prompt analysis + normalization) Policy engine (using Open Policy Agent) Runtime enforcement (tool validation + sandboxing) Streaming pipeline (Apache Kafka + Apache Flink) Output filtering before response leaves the system
The key idea is:
Treat the LLM as untrusted, and enforce everything externally
What broke during testing
Some things that surprised me:
Simple pattern-based prompt injection detection is easy to bypass Obfuscated inputs (base64, unicode tricks) are much more common than expected Tool misuse is the biggest real risk (not the model itself) Most “guardrails” don’t actually enforce anything at runtime What I’m unsure about
Would really appreciate feedback from people who’ve worked on similar systems:
Is a general-purpose policy engine like OPA the right abstraction here? How are people handling prompt injection detection beyond heuristics? Where should enforcement actually live (gateway vs execution layer)? What am I missing in terms of attack surface? Why I’m sharing
This space feels a bit underdeveloped compared to traditional security.
We have CSPM, KSPM, etc… but nothing equivalent for AI systems yet.
Trying to explore what that should look like in practice.
Would love any feedback — especially critical takes.