We’re a small team at Thyris working on open-source AI, with a focus on making AI systems safer and more practical to run in production.
While integrating LLMs into real-world applications, we kept running into the same issues:

- sensitive data leaking into prompts or model outputs,
- unsafe or non-compliant responses,
- and brittle structured outputs (e.g. invalid JSON) breaking downstream systems.
To explore these problems, we built TSZ (Thyris Safe Zone), an open-source guardrails and data security layer that sits between applications and external AI or API systems.
At a high level, TSZ:

- detects and redacts PII and secrets before data leaves your environment,
- applies rule-based and semantic (AI-assisted) guardrails,
- validates structured outputs against predefined schemas,
- and returns clear signals (redacted output, metadata, blocked flag) so applications can decide how to proceed.
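To make that flow concrete, here's a rough sketch of what calling a self-hosted TSZ instance from an application might look like. The endpoint path, request field, and response keys (`blocked`, `metadata`, `redacted_output`) are illustrative assumptions based on the description above, not the documented API; see the docs linked below for the real interface.

```python
# Minimal sketch: an application checks a prompt against a self-hosted
# TSZ instance over HTTP before forwarding it to an LLM. The endpoint
# and payload/response fields here are hypothetical, not TSZ's actual API.
import requests

TSZ_URL = "http://localhost:8000/check"  # hypothetical endpoint


def guarded_prompt(prompt: str) -> str | None:
    resp = requests.post(TSZ_URL, json={"input": prompt}, timeout=10)
    resp.raise_for_status()
    result = resp.json()

    if result.get("blocked"):
        # A guardrail fired; the caller decides how to proceed
        # (retry, fall back, or surface an error).
        print("Blocked by guardrail:", result.get("metadata"))
        return None

    # Forward the redacted text (PII/secrets removed) to the model.
    return result.get("redacted_output", prompt)


safe = guarded_prompt("My card number is 4111 1111 1111 1111, summarize my account.")
if safe is not None:
    print("Safe to send:", safe)
```

The point of the design is the contract: the layer returns explicit signals rather than silently mutating or dropping requests, so the application stays in control of fallback behavior.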
The project is fully open source (Apache 2.0) and self-hosted. We’re sharing it mainly to learn from others building LLM systems in production and to get feedback on what works and what doesn’t.
Repo: https://github.com/thyrisAI/safe-zone
Docs: https://github.com/thyrisAI/safe-zone/tree/main/docs
Happy to answer questions or discuss alternative approaches people are using.