LMAO. Kevin Beaumont roasted that "80% of ransomware attacks are now powered by AI" paper² so hard that MIT appears to have deleted it (link was working as recently as a day ago). The paper was so absurd I burst out laughing at the title. Then when I read their methodology I laughed even harder.
They basically took a sample of ransomware attacks, then tried to figure out how many "used AI". Their definition of "used AI" amounted to "the threat actors are known to use AI for anything, in any capacity".
Their definition of "AI powered" was already dubious, but what's even more hilarious is that they never even explained how they concluded that a threat actor was "using AI".
Many of the threat actors they cited as "using AI" were ones I personally tracked as part of my day job, and I can testify that they did not use AI.
Furthermore, they claim to have analyzed attacks across 2023-2024, but several ransomware groups they cited as "definitely using AI" died out prior to 2023. One even died out before the first GPT model was released.
While this specific incident is especially egregious, it's only a small part of a growing trend. For a while now, tech companies have been disguising marketing blog posts as academic research, sometimes even publishing them in respected journals.
It's very hard to get people to take cybersecurity seriously when we have a bunch of cracked out corporate marketing bozos posting nonsense "research" to scientific journals.
DyslexicAtheist•3h ago
-----------------------------------------------
¹ https://www.linkedin.com/posts/malwaretech_lmao-kevin-beaumo...
² https://web.archive.org/web/20250708174552/https://cams.mit....