I'm interested to know if others are facing similar hurdles and how you are tackling them. Specifically:
- How do you turn abstract AI policies into specific, testable requirements for your development teams?
- Are you automating the enforcement of these AI-specific policies within your CI/CD pipelines, or are you primarily relying on post-deployment monitoring?
- What specific tools, frameworks, or platforms are you using for this purpose?
- What other challenges are you encountering in operationalising AI risk management and governance across the SDLC?
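To make the "testable requirements" and CI/CD questions concrete, here's the kind of thing I have in mind: a small policy-as-code gate that fails a pipeline when a model's metadata is missing required governance fields or an evaluation metric falls below a policy threshold. All field names and thresholds below are hypothetical placeholders, not a real framework.

```python
"""Minimal sketch of a CI policy gate for AI governance checks.

Hypothetical policy: every model must declare an owner, an intended
use, and an evaluation accuracy above a minimum threshold before it
can be deployed.
"""
import json
import sys

# Hypothetical policy, e.g. derived from an internal AI governance standard.
REQUIRED_FIELDS = {"owner", "intended_use", "eval_accuracy"}
MIN_EVAL_ACCURACY = 0.90  # placeholder threshold


def check_model_metadata(metadata: dict) -> list[str]:
    """Return a list of policy violations; an empty list means the gate passes."""
    violations = [
        f"missing required field: {field}"
        for field in sorted(REQUIRED_FIELDS - metadata.keys())
    ]
    accuracy = metadata.get("eval_accuracy")
    if isinstance(accuracy, (int, float)) and accuracy < MIN_EVAL_ACCURACY:
        violations.append(
            f"eval_accuracy {accuracy} is below policy minimum {MIN_EVAL_ACCURACY}"
        )
    return violations


if __name__ == "__main__":
    # Usage in a pipeline step: python policy_gate.py model_card.json
    with open(sys.argv[1]) as f:
        model_metadata = json.load(f)
    problems = check_model_metadata(model_metadata)
    for problem in problems:
        print(f"POLICY VIOLATION: {problem}")
    # Non-zero exit code fails the CI job when any violation is found.
    sys.exit(1 if problems else 0)
```

This is obviously simplistic, but it's the level of translation I'm struggling with: going from a policy document to checks like this that a pipeline can actually run.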
Thanks in advance!