But none of their efforts (which they call "disruptions") have anything to do with AI or models. They're just trying to catch bad actors after the fact by analyzing the prompts and replies their tool gave people.
There is no such thing as AI safety, any more than there could be typewriter safety. Bad typewriter output (e.g., Ted Kaczynski's manifesto) has to be judged by people's actions and constrained the same way -- not by tool limitations, which aren't really feasible.
dstroot•4h ago