It's a simple API that: 1. Scans prompts for injection attacks in real-time (heuristic + OpenAI moderation) 2. Detects when agents drift from their intended behavior 3. Has a kill switch for production incidents
Built for production use. Free tier available. Open source docs and examples.
The problem: Prompt injection is still breaking LLM apps in production. Most guardrails can be bypassed with simple tricks.
The solution: API-first security layer that sits between your app and the LLM.
SamiBuilds•55m ago