I also learned how, because of that, social engineering going up the ranks as the technique to hack systems. All systems are as weak as its weakest point and us humans became that.
Back to AI world, we are talking about bayesian machines conditioned on how humans communicate. To me, then, it’s reasonable to conjecture that techniques used to exploit humans such as social engineering will rapidly become the norm in exploiting AIs. An example of this for text models are prompt injection techniques, but they’ll become more complex as we introduce tool calling and multi-modality to our AIs.