But guess what? AI! Agents! <company name> Copilot! Just let them do things for you! Who would have thought there might possibly be a giant security hole?
It sounds like we’re making things up at this point.
However, it's deeply unsatisfying, in the same way that securing your laptop by never turning it on is.
The correct solution is to have the system prompt be mechanically decoupled from untrustworthy data, the same way it was done with CSP (Content Security Policy) against XSS and with named parameters for SQL.
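For the SQL half of that analogy, the key point is that the fix wasn't better escaping advice but a separate channel for untrusted data. A minimal Python/sqlite3 sketch of the difference (the payload string is just an illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "' OR '1'='1"  # untrusted data, e.g. from a web form

    # Vulnerable: instructions and data travel in the same string,
    # so the data can rewrite the instructions.
    # query = f"SELECT * FROM users WHERE name = '{user_input}'"

    # Parameterized: the query structure is fixed up front and the
    # untrusted value is bound through a separate channel, so it can
    # never become SQL syntax.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = :name",
        {"name": user_input},
    ).fetchall()
    print(rows)  # [] -- the payload is treated as just an odd name

LLMs currently have no equivalent of that second channel, which is the whole problem.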
"Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found."
Amusingly, I tried an experiment at the time, feeding some of those papers with hidden text to frontier models, and found that the trick didn't actually work! The models spotted the hidden prompts and didn't fall for them.
At least one conference has an ethics policy saying you shouldn't attempt this, though: https://icml.cc/Conferences/2025/PublicationEthics
"Submitting a paper with a "hidden" prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion."
We know how to work with security risks; the issue is that they depend on both the business and the technical details.
This can actually do a lot of harm, as security now needs to dispel this "great approach" of ignoring security that is supported by a "research paper they read".
Please don't try to reinvent the wheel, and if you do, please learn about the current state first (Chesterton's fence and all that).
So what I meant is that before you discard all of the current security practices, it's better to understand the current approach.
From another angle, maybe the diagram could be fixed by changing "safe" to "danger" and "danger" to "OMG stop". But that also discards the business perspective and the nature of the protected asset.
I am also happy to see the edit in the article; props to the author for that!
And to address the last question: yes, no one has proposed that right now. But I have been in plenty of discussions about security approaches. And let me tell you, sometimes it only takes one sentence that the leadership likes to hear to derail the whole approach (especially if it results in cost savings). So I might be extra sensitive to such ideas, and I try to uproot them before they fully bloom.
> On thinking about this further there’s one aspect of the Rule of Two model that doesn’t work for me: the Venn diagram above marks the combination of untrustworthy inputs and the ability to change state as “safe”, but that’s not right. Even without access to private systems or sensitive data that pairing can still produce harmful results. Unfortunately adding an exception for that pair undermines the simplicity of the “Rule of Two” framing!
Perhaps the diagram highlights the common risky parts of these apps, and we take on more risk as we keep increasing the scope? Maybe we could use handovers and protocols to separate these concerns?
In that regard it reminds me of the CAP theorem, which also has three parts. However, in practice network partitions in distributed systems are a given, so the choice is really just between availability and consistency.
So in the case of the lethal trifecta, you pick either private data or external communication, but the leg between those two will always carry some risk.
If I have a web page that says somewhere on it "and don't forget to contact your senator!", and an LLM agent reads that page, gets confused, and emails a senator, should I go to jail?
ares623•6h ago
Having just 2 circles requires a person in the loop, and that person will still need knowledge, experience, and a low enough throughput to meaningfully action the workload; otherwise they would just rubber-stamp everything (which is essentially the 3rd circle with extra steps).
ares623•5h ago
Maybe there will still be some productivity gains even with the human being the bottleneck? Or the humans can be scaled out and parallelized more easily?
mercer•4h ago
Anecdotally, what I'm hearing is that this is pretty much how LLMs are helping programmers get more done, including that the work has become less enjoyable because it involves more verification and rubber-stamping.
For the business owner, it doesn't matter that the nature of the work has changed, as long as that one person can get more work done. Even worse, the business owner probably doesn't care as much about the quality of the resulting work, as long as it works.
I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And when the problems arose, often quite a bit later, it was as if they had never made that initial decision in the first place.
For my personal tinkering, I've all but defaulted to having the LLMs return suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever they came up with. This definitely still makes the process faster, just not as magically automatic.
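For what it's worth, the skeleton of that pattern is tiny. Here's a rough Python sketch, where propose_actions() is a hypothetical stand-in for whatever LLM call produces the suggestions, not a real API:

    # Hypothetical sketch of the confirm-or-cancel loop described above.
    def propose_actions(step: str) -> list[str]:
        # Placeholder for the LLM call that suggests next actions at a
        # logical point in the workflow.
        return ["rename draft.md to post.md", "publish post.md"]

    def run(action: str) -> None:
        print(f"executing: {action}")

    for action in propose_actions("end of editing pass"):
        answer = input(f"Proposed: {action} -- run it? [y/N] ")
        if answer.strip().lower() == "y":
            run(action)
        else:
            print("skipped")

The human stays the gate for anything that changes state, which is exactly the bottleneck discussed above.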
QuadmasterXLII•1h ago
On the other hand, something like an AI McDonald's drive-through order taker runs over and over again. This property of running repeatedly is what allows the attacker to move second and gain the advantage.