Trust layers for LangChain, CrewAI, AutoGen, the OpenAI Agents SDK, and RAG pipelines. Each is a pip install that hooks into your existing agent code with ~3 lines of setup:

- HMAC-SHA256 tamper-evident audit chains: every agent decision, tool call, and LLM interaction gets logged to a chain that regulators can verify
- ConsentGate: risk-classifies tool calls and blocks critical operations until approved
- InjectionDetector: 15+ weighted patterns scanning prompts before they reach the model
- WriteGate + DriftDetector (for RAG): prevents knowledge base poisoning and detects retrieval anomalies
- Compliance scanner: "pip install air-compliance && air-compliance scan ./my-project" tells you exactly which articles you're missing
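To make the audit-chain idea concrete, here is a minimal sketch of an HMAC-SHA256 hash chain using only the Python standard library. This is an illustration of the technique, not the library's actual API; the class and method names (AuditChain, record, verify) are invented for the example.

```python
import hashlib
import hmac
import json


class AuditChain:
    """Minimal tamper-evident log: each entry's MAC covers the previous
    entry's MAC plus the serialized event, so modifying or deleting any
    record breaks verification from that point onward."""

    def __init__(self, key: bytes):
        self.key = key
        self.entries = []            # list of (payload_bytes, mac_hex)
        self._last_mac = b"genesis"  # anchor value for the first entry

    def record(self, event: dict) -> str:
        # Canonical serialization so the same event always hashes identically.
        payload = json.dumps(event, sort_keys=True).encode()
        mac = hmac.new(self.key, self._last_mac + payload,
                       hashlib.sha256).hexdigest()
        self.entries.append((payload, mac))
        self._last_mac = mac.encode()
        return mac

    def verify(self) -> bool:
        # Re-walk the chain; any edited payload or MAC fails comparison.
        last = b"genesis"
        for payload, mac in self.entries:
            expect = hmac.new(self.key, last + payload,
                              hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expect, mac):
                return False
            last = mac.encode()
        return True
```

Because every MAC depends on its predecessor, an auditor holding the key can verify the whole history in one pass, and a tampered entry invalidates everything after it.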
Everything maps to specific EU AI Act articles (9, 10, 11, 12, 14, 15). Zero vendor lock-in, Apache 2.0 licensed, and the trust layers have zero core dependencies. The scanner is probably the fastest way to see where your gaps are; it takes about 3 seconds to run on a typical project.

GitHub: https://github.com/airblackbox
PyPI: pip install air-compliance

Happy to answer questions about what the EU AI Act actually requires for AI agent deployments. We've read the full regulation and mapped it to specific technical controls.
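The weighted-pattern approach behind InjectionDetector can be sketched generically. The patterns, weights, and threshold below are invented for illustration and are not the shipped rule set (the real detector carries 15+ rules):

```python
import re

# Illustrative rules only: (compiled pattern, weight in [0, 1]).
PATTERNS = [
    (re.compile(r"ignore (all )?previous instructions", re.I), 0.9),
    (re.compile(r"system prompt", re.I), 0.5),
    (re.compile(r"you are now", re.I), 0.4),
]


def injection_score(prompt: str) -> float:
    # Sum the weights of every matching pattern, capped at 1.0.
    return min(1.0, sum(w for pat, w in PATTERNS if pat.search(prompt)))


def is_suspicious(prompt: str, threshold: float = 0.6) -> bool:
    # Block or flag the prompt before it ever reaches the model.
    return injection_score(prompt) >= threshold
```

Scoring instead of hard-matching lets several weak signals add up to a block, while a single benign phrase stays under the threshold.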
sarunasch•1h ago