link: https://github.com/V4bel/dirtyfrag
detailed writeup: https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...
importantly:
"Copy Fail was the motivation for starting this research. In particular, xfrm-ESP Page-Cache Write in the Dirty Frag vulnerability chain shares the same sink as Copy Fail. However, it is triggered regardless of whether the algif_aead module is available. In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag."
mitigation (I have not tested or verified!):
"Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution. Use the following command to remove the modules in which the vulnerabilities occur."
sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"
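For anyone unsure what those lines do: each `install <module> /bin/false` directive tells modprobe to run /bin/false instead of loading the named module, so any attempt to load it fails. A minimal sketch of the mechanism (writing to a temp file rather than /etc/modprobe.d, so it's safe to run without root; the real mitigation writes dirtyfrag.conf as above):

```shell
# Sketch of the blacklist mechanism: an 'install <module> /bin/false'
# line makes 'modprobe <module>' execute /bin/false (which just exits 1)
# instead of inserting the module, so the load always fails.
conf="$(mktemp)"
printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > "$conf"
cat "$conf"
grep -c '^install ' "$conf"    # prints 3: one directive per module
rm -f "$conf"
```

The trailing `rmmod` in the real command matters too: the modprobe directives only block future loads, so any already-loaded esp4/esp6/rxrpc module has to be removed explicitly.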
conversation around the mitigation suggests you need a reboot, or to run this after the above on already-exploited machines: echo 3 | sudo tee /proc/sys/vm/drop_caches (note the original suggestion, "sudo echo 3 > /proc/sys/vm/drop_caches", wouldn't work, since the redirection happens in the unprivileged shell)

AI is neat because it's higher signal, but yeah, no, we're not getting anywhere close to "safe Linux", AI or not.
In fact, given the official public APIs, Google could replace the Linux kernel with a BSD and userspace wouldn't notice; the only ones affected would be rooted devices and the OEMs themselves baking their Android distros.
Not criticizing whoever found the bug, of course.
Which illustrates pretty well something that's lost when relying heavily on LLMs to do work for you: exploration.
I find that doing vulnerability research using AI really hinders my creativity. When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby. It's like a genie - you get exactly what you asked for and nothing more.
The researcher who discovered Copy Fail relied heavily on AI after noticing something fishy. If he had wade through lots of code manually, he would have had many more chances to spot these twin bugs.
At the same time, I'm pretty sure that with slightly less directed prompting, a frontier LLM would have found these bugs for him too.
It's a very unusual case of negative synergy, where working together hurt performance.
The wrong thing got fixed for copy.fail, because people jumped to blame AF_ALG.
authencesn didn't get fixed, and now we've got the results of that: it turns out you can reach the same (I believe) out-of-bounds write through plain network sockets.
I wish I'd thought of that, but I didn't.
baggy_trough•31m ago
2026-04-29: Submitted detailed information about the rxrpc vulnerability and a weaponized exploit that achieves root privileges on Ubuntu to security@kernel.org.
2026-04-29: Submitted the patch for the rxrpc vulnerability to the netdev mailing list. Information about this issue was published publicly.
2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.
2026-05-07: Detailed information and the exploit for the esp vulnerability were published publicly by an unrelated third party, breaking the embargo.
2026-05-07: After obtaining agreement from distribution maintainers to fully disclose Dirty Frag, the entire Dirty Frag document was published.
firer•9m ago
But this is very similar to Copy Fail, and I assume the thinking was that others might discover it soon as well. Hence the urgency.
At least that's my charitable interpretation.