In our case, tools like LlamaFirewall were helpful, but they didn’t scale into real workflows — we missed having a “Detection as Code” approach, being able to reuse existing detection rules and align with frameworks like MITRE ATLAS or the OWASP LLM Top 10.
So we hacked together an open-source framework (AIDR-Bastion). It’s not perfect, but it lets us test ideas faster: multiple detection pipelines mixing rule-based checks, ML models, vector similarity and classifiers, with Sigma & Roota rule support and some basic integration for classification and logging. It can run as a local logging sensor and perform allow/block/notify actions based on rules.
This works well enough for us, but GenAI security isn’t our core business, so we open-sourced it to see if the community could take it further. Right now we’re experimenting with API rule sync, Apache Kafka streaming, and broader rule support (NOVA, YARA-L).
I’ve been in security for 20+ years (programmer → security admin → auditor → now CISO), but open source is new territory for me — so I’d love feedback: - How are you securing GenAI systems in your environment? - What’s worked (or not) for you?
We open-sourced it here if anyone wants to take a look or contribute: https://github.com/0xAIDR/AIDR-Bastion