A fun thing to keep in mind about software security is that it's premised on the existence of a determined and amoral adversary. In the long-long ago, Richard Stallman's host at MIT had no password; anybody could log into it. It was a statement about the ethics of locking down computing resources. That's approximately the position any security practitioner would be taking if they attempted to moralize against LLM-assisted offensive computing.
rms@gnu.ai.mit.edu
It was actually the A.I. Lab at M.I.T. and they already had their own dedicated subdomain for it. This had to have been around 1990-91. And IIRC, the actual admins made a valiant effort to keep all the shell users away from "root" privileges, so it wasn't a total dumpster fire and the system stayed alive, mostly. https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artif...
Sure, it was basically "a poster on the wall" for the US Air Force, and the US Army guy on Usenet shared nothing about his actual Ballistics Research Labs experiments, but for a college freshman kid, I'd never been on a way k00ler bboard, doodz!!1
"a determined and amoral adversary" - I'd kinda disagree with this (the amoral adversary part being necessary). If you crawl through the vast data breach notification lists that many states are starting to keep - MA, ME, etc. there are so many of them (like literally daily banks, hospitals, etc. are having to report "data breaches" that never ever make the news) - not all of them are happening cause of ransomware. Sometimes it's just someone accidentally not locking a bucket down or not putting proper authorization on a path that should have it. It gets found/fixed but they still have to notify the state. However, if someone doesn't know what they are looking at, or it's a program so it really has no clue what it's looking at and just sees a bunch of data - there's no malicious intent but that doesn't mean that bad things can't happen because that data has now leaked out.
Guess what a lot of these LLMs are training on?
So while Andrey's software is finding all sorts of interesting stuff, there's also a bunch of crap being generated inadvertently that is just bad.
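To make the "unlocked bucket / missing authorization on a path" failure mode concrete, here is a minimal sketch; Flask, the route paths, and the `require_auth` helper are all invented for illustration, not anything from the comment above:

```python
from functools import wraps
from flask import Flask, abort, request

app = Flask(__name__)
CUSTOMER_DUMP = "id,email\n1,alice@example.com\n"  # stand-in for real data

def require_auth(f):
    # Hypothetical check; a real app would validate a session or signed token.
    @wraps(f)
    def wrapper(*args, **kwargs):
        if request.headers.get("Authorization") != "Bearer secret-token":
            abort(401)
        return f(*args, **kwargs)
    return wrapper

@app.route("/export/customers.csv")
def export_customers():
    # The accidental-breach case: a path that should require authorization
    # but doesn't, so any crawler (or scraper feeding an LLM) sees the data.
    return CUSTOMER_DUMP

@app.route("/internal/export/customers.csv")
@require_auth  # the one missing line that turns a breach into a non-event
def export_customers_authed():
    return CUSTOMER_DUMP
```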
Somebody should pitch that to YC.
The argument that LLMs will enable "super powered" malware and that existing security solutions won't be able to keep up is completely overblown. I see zero evidence of this being possible with the current incarnation of "AI" or LLMs.
"Vide coded" malware will be easier to detect if the people creating it don't understand what the code is actually doing and will result in incredible amount of OpSec fails when the malware actually hits the target systems.
I do agree that "vibe coding" will accelerate malware development and generally increase the number of attacks on orgs. However, if you're already applying bog-standard security practices like defense in depth, you shouldn't be concerned about this. If anything, you might want to start thinking about SOC automations in order to reduce alert fatigue.
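One cheap SOC automation of the kind meant here is deduplicating repeat alerts before they reach an analyst. A minimal sketch, assuming alerts arrive as dicts; the field names (`rule_id`, `src_ip`, `technique`) are invented, not any vendor's schema:

```python
import hashlib
import time

def fingerprint(alert: dict) -> str:
    # Collapse alerts that differ only in timestamp into one fingerprint.
    key = f"{alert['rule_id']}|{alert['src_ip']}|{alert['technique']}"
    return hashlib.sha256(key.encode()).hexdigest()

class Deduplicator:
    def __init__(self, window_seconds: int = 3600):
        self.window = window_seconds
        self.last_surfaced: dict[str, float] = {}  # fingerprint -> timestamp

    def should_surface(self, alert: dict) -> bool:
        fp = fingerprint(alert)
        now = time.time()
        last = self.last_surfaced.get(fp)
        if last is not None and now - last < self.window:
            return False  # suppress: analysts saw this pattern recently
        self.last_surfaced[fp] = now
        return True
```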
Stay far away from anyone trying to sell you products to defend against "AI enabled malware". As of right now it's 100% snake oil.
Also, this is probably one of the cringiest articles on the subject I've ever read and is only meant to spread FUD.
I do find the banner video extremely entertaining, however.
What bothers me the most about this article is that the tools that attackers use to do stuff like find 0days in code are the same tools that defenders can use to find the 0day first and fix it. It's not like offensive tooling is being developed in a vacuum and the world is ending as "armies of script kiddies" will suddenly drain every bank account in the world. Automated defense and code analysis is improving at a similar rate as automated offense.
In this awful article's defense though, I would argue that red team will always have an advantage over blue team because blue team is by definition reactive. So as tech continues its exponential advancement, the advantage gap for the top 1% of red teamers is likely to scale accordingly.
It will be extremely interesting to see how vulnerability discovery evolves with LLMs but the whole "sky is falling hide your kids" hype cycle is ludicrous.
I'm having trouble reconciling what you wrote here with that result. Also with my own experiences, not necessarily of finding kernel vulnerabilities (I haven't had any need to do that for the last couple years), but of rapidly comprehending and analyzing kernel code (which I do need to do), and realizing how potent that ability would have been on projects 10 years ago.
I think you're wrong about this.
Also, if you throw these models at enough code bases, they will probably get lucky a couple of times. So far, every claim I have seen didn't stand up to rigorous scrutiny. People find one bug, then inflate their findings and write articles that would make you think they are far more effective than they really are, and I am tired of this hype.
curl had to stop accepting bounties after finding nearly all of them were just AI-generated nonsense…
Also, I stated that they do provide very large gains in certain areas, like writing a fuzz harness and reversing binaries. I am not saying they have absolutely no utility; I am simply tired of grifters inflating their findings for clout. Shit has gotten out of control.
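Since "writing a fuzz harness" is one of the named wins, here is roughly what that boilerplate looks like: a minimal harness for Google's Atheris fuzzer, with the stdlib `json` module standing in as a hypothetical target:

```python
import sys
import atheris

with atheris.instrument_imports():
    import json  # stand-in target; swap in the module you actually want to fuzz

def test_one_input(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(4096)
    try:
        json.loads(text)
    except json.JSONDecodeError:
        pass  # expected parse failure; any other exception is a crash to triage

if __name__ == "__main__":
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()
```

This is exactly the kind of mechanical scaffolding an LLM can crank out quickly; deciding which targets and which crashes actually matter is still the human's job.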
If you can reliably get x% lucky finding vulnerabilities at $Y in cost, then you simply scale that up to find more vulnerabilities.
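As a back-of-envelope version of that scaling argument (every number below is an invented placeholder, not a measured hit rate):

```python
# "x% lucky at $Y per attempt" turns luck into a budgeting exercise.
hit_rate = 0.02        # x: chance one automated run finds a real vulnerability
cost_per_run = 50.0    # Y: compute + triage cost of one run, in dollars
budget = 100_000.0     # total spend

runs = int(budget // cost_per_run)
expected_finds = runs * hit_rate
cost_per_find = budget / expected_finds

print(f"{runs} runs -> ~{expected_finds:.0f} expected vulns, ~${cost_per_find:,.0f} each")
# 2000 runs -> ~40 expected vulns, ~$2,500 each
```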
I want to stop being elliptical and just say directly: you have strange and counterfactual ideas about the security research community of the mid-aughts. 2006-2008 was the height of the security blogosphere. This stuff definitely had a huge community.
You know, there are some pretty crazy run rates out there.
vouaobrasil•1d ago
This is just it: AI, while providing some efficiency gains for the average user, will become simply too powerful. Imagine a superpower that allows you to move objects with your mind. That could be a very nice thing for many people to have, because you could probably help others with it. That's the attitude many hacker-types take. The problem is, it also allows people to kill instantly, which means telekinesis would just be too powerful to set against our animal instincts.
AI is just too powerful – and if more people took a serious stand against it, it might actually be shut down.
sgjohnson•1d ago
The Pandora’s Box is open. It’s over.
blibble•1d ago
unless you plan to never update again
diggan•1d ago
A software update collaborated on by Microsoft, Apple, and countless volunteer groups managing various other distributions?
The cat really is out of the bag. You could probably make it punishable by death everywhere in the world and some people would still use it secretly.
Once things like this run on consumer hardware, I think it's already too late to pull them down fully. You could regulate it, though, and probably have a better chance of limiting the damage; I'm not sure an outright ban could even have the effect you want.
blibble•1d ago
yes you won't get people that won't ever update, but you'll get the overwhelming majority
and the hardware the never-updaters use will eventually fail and won't be able to be replaced
also: ban the release of new "open" models, they will slowly become out of date and useless
combine these, and the problem will solve itself over time
diggan•1d ago
Models released today are already useful for a bunch of stuff; maybe over the course of 100 years they could be considered "out of date", but they don't exactly bitrot by themselves because they sit on a disk, so I'm not sure why they'd suddenly "expire" or whatever you're hinting at.
And even over the course of 100 years, people will continue the machine learning science, regardless of whether it's legal or not; the potential benefits (for a select few) seem too good for people to ignore, which is why the current bubble is happening in the first place.
blibble•1d ago
> And even over the course of 100 years, people will continue the machine learning science
the weak point is big tech: without their massive spending the entire ecosystem will collapse
so that's what we target: politically, legally, technologically and through regulation
we (humanity) only need to succeed in one of these domains once, then their business model becomes nonviable
once you cut off the snake's head, the body will die
the boosters in search of a quick buck will then move onto the next thing (probably "quantum")
blibble•1d ago
agreement isn't needed
its success sows the seeds of its own destruction: if it starts eating the middle class, politicians in each and every country who want to remain electable will move towards this position independently of each other
> and under-estimate how far people are willing to go to make anything survive even when lots of people want that thing to die.
the structural funding is such that all you need to do is chop off the funding from big tech
the nerd in their basement with their 2023 macbook is irrelevant
diggan•1d ago
The "system" isn't a thing, but more like running apps, some run on servers, other consumer hardware. And the parts that run on consumer hardware will be around even if 99% of the current hyped up ecosystem dies overnight, people won't suddenly stop trying to run these things locally.
rglover•1d ago
I get the general "too many variables" argument, but the idea that humans have no means of stopping any of these apps/systems/algorithms/etc. if they get "out of control" (a farce in itself, as it's a chat bot) is ridiculous.
It's very interesting to see how badly people want to be living in, and be active participants in, a sci-fi flick. I think that's far more concerning than the AI itself.
Animats•1d ago
Yes. Look at how much trouble we have now with distributed denial of service attacks.
Go re-read "Daemon" and "Freedom™", by Daniel Suarez (2006 and 2010). That AI is dumber than what we have now.
rco8786•14h ago
LLM code already runs on millions of servers and other devices, across thousands of racks, hundreds of data centers, distributed across the globe under dozens of different governments, etc. The open source models are globally distributed and impossible to delete. The underlying math is public domain for anyone to read.
rglover•8h ago
The power switch is still king, even if it's millions of power switches versus one.