We’ve been building LLMSafe, a Zero-Trust Security & Governance Gateway that sits between your application and the LLM.
The problem we’re trying to solve:
Once you connect an LLM to real data or real users, you open the door to real risks:
• prompt injection
• phishing and social engineering via the LLM
• data exfiltration
• PII leakage
• unsafe or non-compliant outputs
• lack of auditability/governance
LLMs don’t have a native “security layer”, so we built one.
Our pipeline looks like this:
Client
  ↓
Firewall & risk detection (prompt injection, phishing patterns, unsafe intent)
  ↓
Normalization & safe rewrite
  ↓
Policy enforcement
  ↓
Inbound data protection (masking/scrubbing)
  ↓
LLM call
  ↓
Outbound data protection
  ↓
Response governance & filtering
  ↓
Audit logging (trace → decision → outcome)
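Conceptually, each layer inspects (and can rewrite) the request, then either passes it on or blocks it. Here’s a heavily simplified sketch of that shape — illustrative only, not our actual implementation:

```python
# Heavily simplified sketch of a layered gateway pipeline (illustrative, not the real code).
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    allowed: bool
    layer: str
    reason: str = ""
    text: str = ""   # the (possibly rewritten/masked) prompt passed to the next layer

Layer = Callable[[str], Decision]

def firewall(prompt: str) -> Decision:
    # Stand-in for prompt-injection / phishing / unsafe-intent detection.
    if "ignore previous instructions" in prompt.lower():
        return Decision(False, "firewall", "prompt-injection pattern", prompt)
    return Decision(True, "firewall", text=prompt)

def inbound_protection(prompt: str) -> Decision:
    # Stand-in for PII masking before anything reaches the model.
    masked = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", prompt)
    return Decision(True, "inbound_protection", text=masked)

def run_pipeline(prompt: str, layers: list[Layer]) -> Decision:
    text = prompt
    for layer in layers:
        decision = layer(text)
        if not decision.allowed:
            return decision            # stop early and report which layer blocked
        text = decision.text
    return Decision(True, "pipeline", text=text)

print(run_pipeline("Contact me at jane@example.com", [firewall, inbound_protection]))
```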
Everything runs as a gateway so teams can deploy it inside their own infrastructure (Docker), instead of sending data to yet another SaaS.
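In practice, integrating means pointing your existing LLM client at the gateway URL instead of the provider. Roughly like this (the endpoint path and payload here are illustrative, not the documented API — see the docs for the real shape):

```python
# Hypothetical integration sketch: call the gateway instead of the LLM provider directly.
# Hostname, endpoint path, and payload shape below are placeholders.
import requests

GATEWAY_URL = "https://llmsafe-gateway.internal.example"   # your self-hosted deployment

resp = requests.post(
    f"{GATEWAY_URL}/v1/chat/completions",
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Summarize this support ticket: ..."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```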
We also log every decision so you can trace: input → layer → risk → block/allow → output.
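A single audit record per decision looks roughly like this (simplified, with illustrative field names rather than the exact schema):

```python
# Illustrative audit record; field names are simplified, not the exact log schema.
audit_record = {
    "trace_id": "req-7f3a",            # ties the record back to one request
    "layer": "firewall",               # which pipeline stage made the call
    "risk": "prompt_injection",        # what was detected
    "decision": "block",               # block / allow / rewrite
    "input_hash": "sha256:ab12...",    # hash rather than raw text, so logs don't re-leak PII
    "output": None,                    # nothing reached the LLM in this case
    "timestamp": "2025-01-15T12:34:56Z",
}
print(audit_record)
```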
Right now it can:
• detect prompt injection attempts
• detect phishing/social-engineering content
• mask PII automatically
• block risky outputs
• enforce policy rules (toy example below)
• provide a full audit trail
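Policy rules are essentially declarative conditions evaluated against the request context. A toy illustration of the idea (not our actual rule format):

```python
# Toy sketch of declarative policy rules evaluated against a request context.
# Rule names and context fields are illustrative only.
POLICIES = [
    # block prompts containing email addresses unless the caller may share PII
    {"name": "no_external_email",
     "deny_if": lambda ctx: ctx["contains_email"] and not ctx["caller_may_share_pii"]},
    # block whole topics regardless of who is asking
    {"name": "blocked_topics",
     "deny_if": lambda ctx: ctx["topic"] in {"credentials", "payment_card_data"}},
]

def enforce(ctx):
    """Return (allowed, violated_rule) for a request context."""
    for rule in POLICIES:
        if rule["deny_if"](ctx):
            return False, rule["name"]
    return True, None

print(enforce({"topic": "credentials", "contains_email": False, "caller_may_share_pii": False}))
# -> (False, 'blocked_topics')
```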
We’re still actively building and refining it. I’d really appreciate feedback, especially from people building real LLM products or working in security/compliance.
Demo + docs: https://llmsafe.cloud
Happy to answer technical questions and hear what we’re missing.