chrisjj•4d ago
> Despite all advances:
> * No large language model can reliably detect prompt injections
Interesting, isn't it, that we'd never say "No database manager can reliably detect SQL injections". And the fact that this is true is no problem at all.
The difference is not because SQL is secure by design. It is because chatbot agents are insecure by design.
I can't see chatbots getting parameterised querying soon. :)
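The parameterised-querying analogy can be made concrete. A minimal sketch using Python's built-in sqlite3 module: the database never needs to *detect* SQL injection, because bound parameters keep data and instructions in separate channels.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Vulnerable: attacker input is spliced into the query string,
# so the database cannot tell data from SQL.
rows_vuln = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
print(rows_vuln)  # [('alice',)] -- the injection succeeded

# Parameterised: the driver passes the input as a bound value,
# never as SQL, so there is nothing to detect in the first place.
rows_safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows_safe)  # [] -- the literal string matches no row
```

LLM prompts have no equivalent second channel: untrusted text and instructions arrive in the same token stream, which is the "insecure by design" point above.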
kaicianflone•1h ago
Is this where AgentSkills come into play as an abstraction layer?
CuriouslyC•49m ago
A big part of the problem is that prompt injections are "meta" to the models, so model-based detection is potentially getting scrambled by the injection as well. You need an analytic pass to flag/redact potential injections; a well-aligned model should be robust at that point.
niobe•1h ago
I would hope anyone with the knowledge and interest to run OpenClaw would already be mostly aware of the risks and potential solutions canvassed in this article, but I'd probably be shocked and disappointed.
Forgeties79•1h ago
There are definitely people I know who are talking about using it that I want nowhere near my keyboard
dgxyz•29m ago
Yeah that. I had an external "security consultant" (trained monkey) tell me the other day that something fucking stupid we were doing was fine. There are many many people who should not be allowed near keyboards these days.