I've been talking to CISOs at banks and hospitals who are deploying LLMs without proper security. Their existing tools can't handle LLM-specific threats:
- Prompt injection (attackers manipulate LLM behavior, WAFs can't detect it) - Data exfiltration (sensitive data leaks through LLM responses) - Jailbreak attempts (users bypass safety guardrails using encoding)
Gartner predicts 60% of AI enterprises will face a security incident by 2027.
InferShield is a drop-in proxy with real-time threat detection, multi-encoding detection (Base64, hex, URL, Unicode), complete audit logs, and risk scoring. Self-hosted, provider-agnostic, zero code changes required.
v0.1 MVP is live today with 95%+ detection rate (red team tested). MIT licensed, free forever. Enterprise tier coming for compliance features.
Website: https://infershield.io Quick start: docker pull infershield/proxy:latest
Feedback welcome! What LLM security challenges are you facing?