The report covers:
The attack surface of AI-agents and LLMs.
Exploitation pathways such as prompt injection, privilege misuse, and workflow manipulation.
The legal and technical difference between prompt injection and SQL injection.
Regulatory exposure (e.g. GDPR liability) when data leaks occur.
Mitigation strategies to reduce risk, including backend immune layers.
Full report: https://github.com/pablo-chacon/AI-Agent-Vulnerability-and-R...
Would love to hear feedback, especially from those working in AI security, infrastructure, or compliance.