"“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
How about not enabling generating such content, at all?
I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user
I know they said “without being prompted” here but if you click through you’ll see what the person actually selected (“spicy” is not default and is age-gated and opt-in via the nsfw wall).
Very weird for Taylor Swift...
The questions then, for me, are:
* Is Grok considered a tool for the user to generate content for X or is Grok/X considered similar to a vendor relationship
* Is X more like Backpage (not protective enough) than other platforms
I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.
Note that things change. In the early days of twitter (pre X) they could get away with not thinking of the issue at all. As technology to detect CSAM marches on they need to use it (or justify why it shouldn't be used - too many false positives???). As a large platform for such content they need to push the state of the art in such detection.. At no point do they need perfection - but they need to show they are doing their reasonable best to stop this.
The above is of course my opinion. I think the courts will go a similar direction, but time will tell...
Which he does and responded with “I will blame and punish users.” Which yeah, you should, but you also need to fix your bot. He’s certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers “wrongthink,” but suddenly when there are real, serious consequences he gets to hide behind “it’s just a user problem”?
This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry we are just too big for real people to handle all of it but we’ll get it right this time.” Rinse repeat.
Some of these things are going into the ENFORCE act, but it's going to be a muddy mess for a while.
See a lawyer for legal details of course.
Women should be able to exist in public without having to constantly have porn made of their likeness and distributed right next to their activity.
I replied to:
> They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.
“It” being “generating CSAM”.
I was not attempting to comment on some random censorship debate,
but instead: that CSAM is a pretty specific thing.
With pretty specific legal liabilities, dependent on region!
Surprising, usually the system automatically bans people who post CSAM and elon personally intervenes to unban then.
https://mashable.com/article/x-twitter-ces-suspension-right-...
Also, since Grok is really good in getting the context, something akin to "remove their T-shirt" would be enough to generate a picture someone wanted, but very hard to find using keywords.
IMO they should mass hide ALL the images created since then specific moment, and use some sort of the AI classifier to flag/ban the accounts.
I understand everyone pouncing when X won't own Grok's output, but output is directly connected to its input and blame can be proportionally shared.
Isn't this a problem for any public tool? Adversarial use is possible on any platform, and consistent law is far behind tech in this space today.
From my knowledge (albeit limited) about the way LLMs are set up, they most definitely have abilities to include guardrails of what can't be produced. ChatGPT has some responses to prompts which stops users from proceeding.
And X specifically: there have many cases of X adjusting Grok where Grok was not following a particular narrative on political issues (won't get into specifics here). But it was very clear and visible. Grok had certain outputs. Outcry from certain segments. Grok posts deleted. Trying the same prompts resulted in a different result.
So yeah, it's possible.
The guardrails have mostly worked. They have never ever been reliable.
I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.
The answer is currently no, I presume.
For arguments sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second and third order review points where before an image is posted by Grok, another system could scan the image to verify whether it's CSAM or not, and if the confidence is low, then human intervention could come into play.
I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.
The problem is that these guardrails are trivially bypassed. At best you end up playing a losing treadmill game against adversarial prompting.
Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
It feels somewhat more clearcut when you say to AI, "Draw me an image of Mickey Mouse", but why is that different than photocopying a picture of Mickey Mouse, and using Photoshop to draw a picture of Mickey Mouse? Photo copiers will block copying a dollar bill in many cases - should they also block photos of Mickey Mouse? Should they have received firmware updates whenever Steamboat Willy fell into public domain, such that they can now be allowed to photocopy that specific instance of Mickey Mouse, but none other?
This is a slippery slope, the idea that a person using the tool should hold the tool responsible for creating "bad" things, rather than the person themselves being held responsible.
Maybe CSAM is so heinous as to be a special case here. I wouldn't argue against it specifically. But I do worry that it shifts the burden of responsibility onto the AI or the model or the service or whatever, rather than the person.
Another thing to think about is whether it would be materially different if the person didn't use Grok, but instead used a model on their own machine. Would the model still be responsible, or would the person be responsible?
I agree, but I don't know where that line is.
So, back in the 90s and 2000s, you could get The Gimp image editor, and you could use the equivalent of Word Art to take a word or phase and make it look cool, with effects like lava or glowing stone, or whatever. The Gimp used ImageMagick to do this, and it legit looked cool at the time.
If you weren't good at The Gimp, which required a lot of knowledge, you could generate a cool website logo by going to a web server that someone built, giving them a word or phrase, and then selecting the pre-built options that did the same thing - you were somewhat limited in customization, but on the backend, it was using ImageMagick just like The Gimp was.
If someone used The Gimp or ImageMagick to make copyrighted material, nobody would blame the authors of The Gimp, right? The software were very nonspecific tools created for broad purposes, that of making images. Just because some bozo used them to create a protected image of Mickey Mouse doesn't mean that the software authors should be held accountable.
But if someone made the equivalent of one of those websites, and the website said, "click here to generate a random picture of Mickey Mouse", then it feels like the person running the website should at least be held partially responsible, right? Here is a thing that was created for the specific purpose of breaking the law upon request. But what is the culpability of the person initiating the request?
Anyway, the scale of AI is staggering, and I agree with you, and I think that common decency dictates that the actions of the product should be limited when possible to fall within the ethics of the organization providing the service, but the responsibility for making this tool do heinous things should be borne by the person giving the order.
sorry you're not convincing me. X chose to release a tool for making CSAM. they didn't have to do that. They are complicit.
Truly, civilization was a mistake. Retvrn to monke.
You can have all the free speech in the world, but not with the vulnerable and innocent children.
I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally? I think there should be responsibility on builders/platform owners to definitely build guardrails in on things that are explicitly illegal and morally repugnant.
Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.
But you'll also miss some. You'll always miss some, even with the best guard rails. But 99% is better than 0%, I agree.
> ... and just expect the user to use it legally?
I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, and you can't guarantee that the hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop the adult from vandalism. The person has to be held responsible at a certain point.
Pornography is regulated. CSAM is illegal. Hosting it on your platform and refusing to remove it is complicity and encouragement.
There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is on the user's hands, it's outside of the manufacturer's control.
In this case, a malicious user isn't downloading Grok's model and running it on their GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once the malicious usage of their product gets relevant.
Historically tools have been uncensored, yet also incredibly difficult and time-consuming to get good results with.
Why spend loads of effort producing fake celebrity porn using photoshop or blender or whatever when there's limitless free non-celebrity porn online? So photoshop and blender didn't need any built-in censorship.
But with GenAI, the quantitive difference in ease-of-use results in qualitative difference in outcome. Things that didn't get done when it needed 6 months of practice plus 1 hour per image are getting done now it needs zero practice and 20 seconds per image.
If you operate the tool, you are responsible. Doubly so in a commercial setting. If there are issues like Copyright and CSAM, they are your responsibility to resolve.
If Elon wanted to share out an executable for Grok and the user ran it on their own machine, then he could reasonably sidestep blame (like how photoshop works). But he runs Grok on his own servers, therefore is morally culpable for everything it does.
Your servers are a direct extension of yourself. They are only capable of doing exactly what you tell them to do. You owe a duty of care to not tell them to do heinous shit.
Posting a tweet asking Grok to transform a picture of a real child into CSAM is no different, in my mind, than asking a human artist on twitter to do the same. So in the case of one person asking another person to perform this transformation, who is responsible?
I would argue that it’s split between the two, with slightly more falling on the artist. The artist has a duty to refuse the request and report the other person to the relevant authorities. If that artist accepted the request and then posted the resulting image, twitter then needs to step in and take action against both users.
There's one more line at issue here, and that's the posting of the infringing work. A neutral tool that can generate policy-violating material has an ambiguous status, and if the tool's output ends up on Twitter then it's definitely the user's problem.
But here, it seems like the Grok outputs are directly and publicly posted by X itself. The user may have intended that outcome, but the user might not have. From the article:
>> In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted.
Overall, I think it's fair to argue that ownership follows the user tag. Even if Grok's output is entirely "user-generated content," X publishing that content under its own banner must take ownership for policy and legal implications.
So exactly who is considered the originator is a pretty legally relevant question particularly if Grok is just off doing whatever and then posting it from your input.
"The persistent AI bot we made treated that as a user instruction and followed it" is a heck of a chain of causality in court, but you also fairly obviously don't want to allow people to laundry intent with AI (which is very much what X is trying to do here).
If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?
I think a court is going to step in and help answer these questions sooner rather than later.
At least I think that's the plan.
There's a line we have to define that I don't think really exists yet, nor is it supported by our current mental frameworks. To that end, I think it's just more sensible to simply forbid it in this context without attempting to ground it. I don't think there's any reason to rationalize it at all.
you going to ban all artsy software ever because a bad actor has or can use it to do bad actor things?
Also, punishment is a rather inefficient way to teach the public anything. The people who come through the gate tomorrow probably won't know about the punishment. It will often be easier to fix the environment.
Removing troublemakers probably does help in the short term and is a lot easier than punishing.
X can actively work to prevent this. They aren't. We aren't saying we should blame the person entering the input. But, we can say that the side producing CSAM can be held responsible if they choose to not do anything about it.
> Isn't this a problem for any public tool? Adversarial use is possible on any platform
Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."
Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.
1. Twitter appears to be taking no effort to make this difficult. Even if people can evade guardrails this does not make the guardrails worthless.
2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also the distribution and boosting of this content. The reason why a ton of people doing this is not because they personally want to jack it to somebody, but because they want to humiliate them in public.
3. Decision makers at twitter are laughing about what this does to the platform and its users when they "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women isn't also odious.
Do yourself a favor and not Google that.
Regardless of how fringe, I feel like it should be in everyones best interests to stop/limit CSAM as much as they reasonably can without getting into semantics of who requested/generated/shared it.
It may shock you to learn that bigamy and sky-burials are also quite illegal.
Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.
Also, where are all the state attorneys general?
(And whatever my timeline has become now is why I don't visit more often, wtf, used to only be cycling related)
Edit: just to bring receipts, 3 instances in a few scrolls: https://x.com/i/status/2007949859362672673 https://x.com/i/status/2007945902799941994 https://x.com/i/status/2008134466926150003
I'm sure "The only people who say it's not are <x>" is an abominable thought pattern Nazis and similar types would love everyone to have. It makes for a great excuse to never weigh things on their merits, so I'm not sure why you feel the need to invoke it when the merits are already in your court. I can't look at these numbers https://i.imgur.com/hwm2bI5.png and conclude most Americans are Nazi's instead of being willing to accept perhaps not everyone sees it the same way I do even if they don't like Nazis either.
To any actual Nazi supporters out there: To hell with you
To anybody who thinks either everyone agrees with what they see 100% of the time or they are a literal Nazi: To hell with you as well
So yeah, I believe there are a LOT of Nazi-adjacent folks in this country: they're the ones who voted for Trump 3 times even after they knew he was a fascist piece of garbage.
- Even assuming all who weren't sure (13%) should just be discounted as not having an opinion, like those who had not heard about it (22%), 32% is still not a majority of the remaining (100%-13%-22%) = 65%. 32% could have been a plurality of those with an opinion, but since you insisted on lumping things into 3 buckets of 32%, 35%, and remaining %, the remaining % of 33% would actually get the plurality of those who responded with opinions by this definition.
N.b. If just read straight from the sheet, "A Nazi salute" would have already had a plurality. Though grouping like this is probably the more correct thing to do, it actually ends up significantly weakening the overall position of "more people agree than not" rather than strengthening it.
- But, thankfully, "A Nazi Salute" + "A Roman Salute" would actually have been 32+2=34%, so plurality is at least restored by more than one whole percentage point (if you excluded the unsure or unknowing)!
- However, a "Roman salute" (which is a bit of a farce of a name really) can't really be assumed to be fungible with the first option in this poll. If it were fully fungible, it could have been combined into that option. I.e. there's no way to tell which adults responding "A Roman salute" meant to be counted as "a general fascist salute, as the Nazis later adopted" or meant to be counted as "a non-fascist meaning of the salute, like the Bellamy salute was before WWII". So whichever wins this game of eeking out percentage points comes down to how each person wants to group these 2 percentage points. Shucks!
- In reality, between error margins and bogus responses, this is about as close as one could expect to get for an equal 3 way split between "it was", "it wasn't", and "dunno/don't care", and pulling ahead a percentage point or two is really quite irrelevant beyond that it is, blatantly, not actually a majority that agree it was a Nazi-style salute.
Even though I'm one who agrees with you Elon exhibits neo-nazi tendencies, the above just shows how we go from "Elon replies directly supporting someone in a thread about Hitler being right about the Jewish community" and similar things constantly for years to debating individual percentage points to try to claim our favorite sub-majority says he likely made a one off hand gesture 3 years ago. Now imagine I was actually a Nazi supporter walking into the thread - suddenly we've gone from talking about direct pro-Nazi statements and retweets constantly in his feed to a chance for me to debate with you whether the majority think he made a one off hand gesture 3 years ago? Anyone concerned with Musk's behavior should want to avoid this topic with a 20 foot pole so they can get straight to the real stuff.
Also... I've run across a fair share of crypto lovers who turn out to be neo-nazish, but I'm not sure how you're piecing together that such a large portion of the population is a "crypto-Nazi" when something like only 28% of the population has crypto at all, let alone is a Nazi too. At least we're past "anyone who disagrees with my interpretations can only be doing so as a Nazi" though.
Thanks for the note!
Whether HN wants to endorse a political ideology or not, their approach to handling these issues is a material support of the ideologies these stories and comments are criticizing.
There are all sorts of approaches that a moderation team could take if they actually believed this was a problem. For example, identify the users who regularly downvote/flag stories like this that end up being cleared by the moderation team for unflagging or the 2nd chance queue and devalue their downvotes/flags in the future.
This just describes HN as a whole, so if this is the concern, might as well shut the site down.
I think the biggest thing HN could do to stop this problem is to not make flagging affect an article's ranking until after a human mod reviews the flags and determines them to be appropriate. Right now, all bad actors apparently have to do is be quick on the draw, and get their flagging ring in action ASAP. I'm sure any company's PR team (or motivated Elon worshiper) can buy "100 HN flags on an article" on the dark web right now if they wanted to.
In 2020, Dang said [1]
> Voting ring detection has been one of HN's priorities for over 12 years: [...]
> I've personally spent hundreds of hours working on this, as well as tracking down voting rings of every imaginable sort. I'd never claim that our software catches everything, but I can tell you that it catches so much that I often go through the lists to find examples of good projects that people were trying ineptly to promote, and invite them to do it again in a way that is more likely to gain community interest.
Of course this sort of thing is inherently heuristic; presumably bots throw up a smokescreen of benign activity, and sophisticated bots could present a very realistic, human-like smokescreen.
I (and others) were arguing that the Trump administration is probably, and unfortunately, the most relevant topic to the tech industry on most any given day. This is because computer is mostly made out of people. The message that these political stories intersect deeply with technology (as is seen here) seems to have successfully gotten through.
I wish the most relevant tech story of every day were, say, some cool new operating system, or something cool and curiosity-inspiring like "you can sort in linear time" or "python is an operating system" or "i made X rewritten in Y" or whatever.
I think in most things, creation is much harder than destruction, but software and software systems are an exception where one individual can generally do more creation than destruction. So, it's particularly interesting (and jarring) when a few individuals are able to make decisions that cause widespread destruction.
We should collectively be proud that we have a culture where creation is easier than destruction. But it's also why the top stories of any given day will be "Trump did X" or "us-east-1 / cloudflare / crowdstrike is down" or "software widely used in {phones / servers} has a big scary backdoor".
Now that you mention it - I've noticed the same on Youtube ... I used to get suspended every 5 minutes on there.
"Major Silicon Valley Company's Product Creates and Publishes Child Porn" has nothing to do with politics. It's not "political content." It is relevant tech news when someone investigates and points out wrongdoing that tech companies are up to. If another tech company's product was doing this, it would be all over HN and there would be pretty much no flagging.
When these stories get flagged, it's because people don't want bad news to get out about the company--it's not about avoiding politics out of principle.
I'm not saying you're wrong about it being brigaded by PR bots, I'm saying it's still political. Hell, everything's political.
edit: back to 14, kinda crazy
But I generally consider something political if it involves politicians, or anyone being upset about anything someone else is doing, or any topic that they could mention on normal news. I prefer hn to be full of positive things that normal people don't understand or care about.
(As a long-term Musk-sceptic, I can confirm that Musk-critical content tended to get insta-flagged even years before he was explicitly involved in politics.)
Kinda like the scientists building the atomic bomb.
They'll be in for a rude awakening.
Like, the entirety of DOGE was such an obviously terrible series of events, but for whatever reason, the above were both big cheerleaders on Twitter.
And yeah the moderation team here have been clearly letting everything Musk-related be flagged even after pushback. It's absolutely vile. I've seen many people try to make posts about the false flagging issue here, only to have those posts flagged as well (unapologetically, on purpose, by the mods themselves).
Personally I've never seen anything like this.
Once again - links are trivial to share.
Otherwise this is hearsay.
The ones I reported, I deleted the report emails so I can't help you at this moment. I don't know why you're surprised - you can go looking yourself and find examples
https://x.com/UpwardChanging posts Hitler content, 14 words, black sun graphics, swastikas, antisemitic content etc. 21k followers
https://x.com/hvitrulfur supportively reposts swastika content, white supremacism, anti-black racism, islamophobia, 14 words
https://x.com/unconquered_sol black sun, swastikas, fasces, hitler glorification. 70k followers
2. Seems like case of https://en.wikipedia.org/wiki/White_guilt which spills into racism/white-supremacy.
3. This is literally art. Not my taste of course.
OP's claim was X is swimming in hate speech.
p.s. communist symbols are banned in a lot of the world too (https://en.wikipedia.org/wiki/Bans_on_communist_symbols), yet this is ok for bluesky:
* https://bsky.app/profile/mikkel314.bsky.social/post/3mbe62hg...
* https://bsky.app/profile/gwynnstellar.bsky.social/post/3mb5p...
* https://bsky.app/profile/negatron00.bsky.social/post/3mbfnnh...
* https://bsky.app/profile/kyulen742.bsky.social/post/3mb4nkeg...
* https://bsky.app/profile/mommyanddaddyslittlebimbo.com/post/...
They are in here too. But thanks to moderation they are usually more subtle and use dog whistles or proxy targets.
Seems like bot behavior.
There’s one in this thread. A sibling to my comment.
I mean, honestly, you are wasting your time. Why would you expect the website run by the guy who likes giving Nazi salutes on TV to take down Nazi content?
There's no point trying to engage with Twitter in good faith at this point; only real option is to stop using and move on (or hang out in the Nazi bar, I guess).
Everything that is awful in the diff between X and Twitter is there entirely by decision and design.
It’s fundamentally just another way of boosting account engagement metrics by encouraging repliers to signal that they are smart and clued-in. But it seems to work exceptionally well because it’s inescapable at the moment.
I’m not sure if this is much worse than the textual hate and harassment being thrown around willy nilly over there. That negativity is really why I never got into it, even when it was twitter I thought it was gross.
I haven't seen Xi, but I am unfortunate enough to know that such an animated depiction of Maduro also exists.
These people are clearly doing it largely for shock value.
1. Denmark taxes its rich people and has a high standard of living.
2. Scammy looking ad for investments in a blood screening company.
3. Guy clearing ice from a drainpipe, old video but fun to watch.
4. Oil is not actually a fossil fuel, it is "a gift from the Earth"
5. Elon himself reposting a racist fabrication about black people in Minnesota.
6. Climate change is a liberal lie to destroy western civilization. CO2 is plant food, liberals are trying to starve the world by killing off the plants.
7. Something about an old lighthouse surviving for a long time.
8. Vaccine conspiracy theories
9. Outright racism against Africans, claiming they are too dumb to sustain civilized society without white men running it.
10. One of those bullshit AI videos where the AI doesn't understand how pouring resin works.
11. Microsoft released an AI that is going to change everything, for real this time, we promise.
12. Climate change denialism
13. A post claiming that the Africa and South America aren't poor because they were robbed of resources during the colonial era and beyond, but because they are too dumb to run their countries.
14. A guy showing how you can pack fragile items using expanding foam and plastic bags. He makes it look effortless, but glosses over how he measures out the amount of foam to use.
15. Hornypost asking Grok to undress a young Asian lady standing in front of a tree.
16. Post claiming that the COVID-19 vaccine caused a massive spike (5 million to 150 million) cases of myocarditis.
17. A sad post from a guy depressed that a survey of college girls said that a large majority of them find MAGA support to be a turn off.
18. Some film clip with Morgan Freeman standing on a X and getting sniped from an improbable distance
19. AI bullshit clip about people walking into bottomless pits
20. A video clip of a woman being confused as to why financial aid forms now require you to list your ethnicity when you click on "white", with the only suboptions being German, Irish, English, Italian, Polish, and French.
Special bonus post: Peter St Ogne, Ph. D claims "The Tenth Amendment says the federal government can only do things expressly listed in the Constitution -- every other federal activity is illegal." Are you wondering what federal activity he is angry about? Financial support for daycare.
So yeah, while it wasn't a total and complete loss it is obvious that the noise far exceeds the signal. It is maybe a bit of a shock just how much blatant climate change denialism, racism, and vaccine conspiracies are front page material. I'm saddened that there are people who are reading this every day and taking it to heart. The level of outright racism is quite shocking too. It's not even up for debate that black people are just plain inferior to the glorious aryan race on Twitter. This is supposedly the #1 news source on the Internet? Ouch.
Edit: Got the year wrong at the top of the post, fixed.
What to do about it is to point out to those people in the middle how badly things are being fucked up, preferably with how those mistakes link back to their pocketbook.
The CSAM machine is only a recent addition.
And you thought that was a different argument than "you shouldn't have worn that skirt if you didn't want to get raped"?
They weren't placed there by God.
Fuck X.
It's become a bit of a meme to do this right now on X.
FWIW (very little), it's also on a lot of male posts, as well. None of that excuses this behavior.
I assume the courts will uphold this anyway because Musk rich and cannot be held accountable for his actions.
So from technical wonder to just like a pen in one easy step. Wouldn’t it be great if you could tell the AI what not to output?
This has been tried extensively and has not yet fully worked. Google "ai jailbreaks".
The locks on my doors will fail if somebody tries hard enough. They are still valuable.
Only because of the broader context of the legal environment. If there was no prosecution for breaking and entering, they would be effectively worthless. For the analogy to hold, we need laws to throw coercive measures against those trying to bypass guard rails. Theoretically, this already exists in the Computer Fraud and Abuse Act in the US, but that interpretation doesn't exist quite yet.
Preventing 100%? Fail.
Reducing the number of such images by 10-25% or even more? I don’t think so.
Not to mention the experience you get to know what you can and what you can’t prevent.
And that vibe I mentioned in another comment is getting stronger and stronger.
And of course all of this is narrowly focused on CSAM (not that it should be minimized) and not on the fact that every person on X, the everything app, has been opened up to the possibility of non-consensual sexual material being generated of them by Grok.
They should disable it in the Netherlands in this case since it really sounds like a textbook slander case and the spreader can also be held liable. note: it's not the same as in the US despite using the same word, deepfakes have been proven as slander and this is no different. Especially if you know it's fake by using "AI". There have been several cases of pornographic deep fakes, all of which were taken down quickly, in which the poster/creator was sentenced. The unfortunate issue even of taking posts down quickly is unfortunately the rule which states that if something is on the internet it stays on the internet. The publisher always went free due to acting quickly and not creating it. I would like to see where it goes when both publisher and creator are the same entity, and they do nothing to prevent it.
Nobody in the Netherlands gives one flying fuck about American laws what GROK is doing violates many Dutch laws. Our parliament actually did it's job and wrote some stuff about revenge porn, deep fakes and artificial CP.
For civil liability, 230 really shouldn't apply; as you say, 230's shield is about avoiding vicarious liability for things other people post. This principle stretches further than you might expect in some ways but here Grok just is X (or xAI).
Nothing's set in stone much at all with how the law treats LLMs but an attempt to say that Grok is an independent entity sufficient to trigger 230 but incapable of being sued itself, I don't see that flying. On the other hand the big AI companies wield massive economic and political power, so I wouldn't be surprised to see them push for and get explicit liability carveouts that they claim are necessary for America to maintain its lead in innovation etc. etc., whether those come through legislation or court decisions.
When the far-right paints trans people as pedophiles, it's not an accident that also provides cover for pedophiles.
The age of consent between 16 and 18 is a relatively high born from progressive feminist wins. In the United States, the lowest AOC was 14 until the 1990s, and the AOC in the US ranged from _7 to 12_ for most of our existence.
To be clear, I'm in defense of a high age of consent. But it's something that had to be fought for, and it's not something that can be assumed to be safe in our culture (like the rejection of nazis and white supremacists, or valuing womens rights including voting and abortion).
Influential politicians like Tom Hofeller were advocates for pedophilia and nobody cares at all. Trump is still in power despite the Epstein controversy, Matt Gaetz still hasn't been punished for paying for sex with an underage girl in 2017. The Hitler apologia in the far-right spaces even explicitly acknowledge he was a pedophile. Etc.
In a different era, X would have been removed from Apple and Google's app stores for the CEO doing nazi salutes and the chatbot that promoting Hitler. But even now that X is a CSAM app, as of 3PM ET, I can still download X on both of their app stores. That would not have been normal just two years ago.
This has already been a culture war issue for awhile, there is a pro-pedophilia side, and this is just another victory for them.
Projection. It’s always projection…
Taking creepy pictures and asking a machine to create create creapy pictures for the world to see are not the same.
Why not charge the people who make my glasses cuz they help me see the CP? Why not charge computer monitor manufacturers? Why not charge the mine where they got the raw silicon?
Here you have a product which itself straight up produces child porn with like absolutely zero effort. Very different than some object which happens to be used, photograph materials
Nikon doesn't sell a 1-minute child porn machine, xAI apparently does.
Maybe you think child porn machines should be sold?
If it was a case where CSAM production becomes mainstream use case I would have agreed but it is not.
How hard is this? What are they doing now, and is it enough? Do we know how hard they are trying?
For argument's sake, what if they had truly zero safegaurd around it, you could type "generate child porn" and it would 100% of the time. Surely you'd agree they should prevent that case, and be held accountable if they never took action to prevent it.
Regulation, clear laws around this would help. Surely they could try go get some threshold of difficulty in place that is a requirement to adhere to preventing.
I'm not in CP so I don't try to make it generate such content but I'm very annoyed that all providers are trying to lecture me when I try to generate anything about public figures for example. Also, these preventive measures are not working well at all, yesterday I had one denying to generate infinite loop claiming its dangerous.
Just throw away this BS about safety and jail/fine whomever commits crime with these tools. Make tools tools again and hold people responsible for the stuff they do with these tools.
When do we cross the line of culpability with tool-assisted content? If I have a typo in my prompt and the result is illegal content, am I responsible for an honest mistake or should the tool have refused to generate illegal content in the first place?
Do we need to treat genAI like a handgun that is always loaded?
knowingly allowing it is not in good faith.
That's what section 230 says. The content in question here is not provided by "another information content provider", it is provided by X itself.
For example, if someone posted CSAM on HN and Dang deleted it, I think that it would be wrong to go after HN for hosting the content temporarily. But if HN hosted a service that actively facilitated, trivialized, and generated CSAM on behalf of users, with no or virtually no attempt to prevent that, then I think that mere deletion after the fact would be insufficient.
But again, you can just use "Grok is generating the content" to differentiate if that doesn't compel you.
Look what happens when you put in an image of money into Photoshop. They detect it and block it.
Who cares about Adobe? I'm talking about Grok. I can consistently say "I believe platforms should moderate content in accordance with Section 230" while also saying "And I think that the moderation of content with regards to CSAM, for major platforms with XYZ capabilities should be stricter".
The answer to "what about Adobe?" is then either that it falls into one of those two categories, in which case you have your answer, or it doesn't, in which case it isn't relevant to what I've said.
but to answer your point, no for two reasons:
1) you need to bring your own source material to create it. You can't press a button that says "make child porn"
2) its not a reasonable to expect that someone would be able to make CSAM in photoshop. However more importantly the user is the one hosting the software, not adobe.
Where is this button in Grok? You have to as the user explicitly write out a very obviously bad request. Nobody is going to accidentally get CSAM content without making a conscious choice about a prompt that's pretty clearly targeting it.
No, you need to train, take a lot of time and effort to do it. with grok you say "hey make a sexy version of [picture of this minor]" and it'll do it. that doesn't take traning, and its not a high bar to stopping people doing it.
The non-CSAM example is this, it's illegal, in the USA to make anything that looks like a US dollar bill. ie photocopiers have blocks on them to stop you making copies of it.
You can get round that as a private citizen but its still illegal. A company knowingly making a photocopier that allows you to photocopy dollar bills is in for a bad time.
I'm at a loss to explain it, given media's well known liberal bias.
I think it's time to revisit these discussions and in fact remove Section 230. X is claiming that the Grok CSAM is "user generated content" but why should X have any protection to begin with, be it a human user directly uploading it or using Grok to do this distribution publicly?
The section 230 discussion must return, IMHO. These platforms are out of control.
Removing Section 230 was a big discussion point for the current ruling party in the US, when they didn't have so much power. Now that they do have power, why has that discussion stopped? I'd be very interested in knowing what changed.
But beyond the legality or obvious immorality, this is a huge long-term mistake for X. 1 in 3 users of X are women - that fraction will get smaller and smaller. The total userbase will also get smaller and smaller, and the platform will become a degenerate hellhole like 4chan.
Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action. DogeDesigner regularly sub tweets Elon agreeing to his latest dumb take of the day, and even seems to have based his entire identity on Elon's doge obsession.
I can't imagine how terrible that self imposed delusion feels deep down for either of them.
A similar article[1] briefly made it to the HN front page the other day, for a few minutes before Elon's army of unpaid yes-men flag-nuked it out of existence.
The person(s) ultimately in charge of removing (or preventing the implementation of) Grok guardrails might find themselves being criminally indicted in multiple European countries once investigations have concluded.
They might just let this slide to not rock the boat, either out of fear and they will do nothing, or to buy time if they are actually divesting from the alliance with and economic dependence on the US
The asshole puckering is from how Trump has completely flipped the table, everything is hyper transactional now, and as we’ve seen military action against leaders personally is also on the table.
I’m saying I could see the EU let this slide now because it’s not worth it politically to regulate US companies for shit like this anymore. Whether that would be out of fear or out of trying to buy time to reorganize would probably end up in future getting the same kind of historical analysis that Chamberlain’s policy of appeasement to Germany gets nowadays
Suppose, if instead of an LLM, Grok was an X employee specifically employed to photoshop and post these photos as a service on request. Section 230 would obviously not immunize X for this!
https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
Generating a non-real child could be argued that it might not count. However thats not a given.
> The term “child pornography” is currently used in federal statutes and > is defined as any visual depiction of > sexually explicit conduct involving a > person less than 18 years old.
Is broad enough to cover anything obviously young.
but when it comes to "nude-ifing" a real image of a know minor, I strognly doubt you can use the defence its not a real child.
Therefore your knowingly generating and distributing CSAM, which is out of scope for section 230
What's "person" here? Usually, in law, "person" has a very specific meaning.
But
the law applies if its a depiction of a person who is real, So a sexualised hand draw drawing of a recognisable person, (who is a minor) is CSAM.
Which means if someone says to grok "hey make a sexy picture of this[post of a minor]" and it generates a depiction of that minor, its CSAM.
They have something like Section 230 in the E-Commerce Directive 2000/31/EC, Articles 12-15, updated in the Digital Service Act. The particular protections for hosts are different but it is the same general idea.
Grok is just another tool, and IMO it shouldn't have guard rails. The user is responsible for their prompts and what they create with it.
Only one of these is easily preventable with guardrails.
How is the world improved by an AI tool that will generate sexual deepfake images of children?
It's both. Very simple. You can't get around liability by forming a conspiracy [0].
Or do you think a Microsoft exec should go to jail every time someone uses it to write a death threat?
https://www.justice.gov/usao-ak/pr/federal-prosecutors-alask...
This is a dangerous product, the manufacturer _knows_ it is dangerous, and yet still they provide the service for use.
When Grok stated that Israel was committing genocide, it was temporarily suspended and fixed[0]. If you censor some things but not others, enabling the others becomes your choice. There is no eating the cookie and having it too - you either take a "common carrier" stance or censor, but also take responsibility for what you don't censor.
[0] https://www.france24.com/en/live-news/20250813-chatbot-grok-...
You left out "who controls the output of the tool", which makes it a strawman.
If you try, you quickly end up codifying absurdities like the 80%-finished-receiver rule in firearm regulation. See https://daytonatactical.com/how-to-finish-an-80-ar-15-lower-...
People who say "society should permit X, but only if it's difficult" have a view of the world incompatible with technological progress and usually not coherent at all.
The law is filled with these questions. "Well, how do you draw the line" was not a sufficient defense in Harris v. Forklift Systems.
The LLM itself is more akin to a gun available in a store in the "gun is a tool" argument (reasonable arguments on both side in my opinion); however, this situation more like a gun manufacturer creating a program to mass distribute free pistols to a masked crowd, with predictable consequences. I'd say the person running that program was either negligent or intentionally promoting havoc to the point where it should be investigated and regulated.
IMO, the fact that you would say this is further evidence of rape culture infecting the world. I assure you that people do care about this.
And friction and quality matters. When you make it easier to generate this content and make the content more convincing, the number of people who do this will go up by orders of magnitude. And when social media platforms make it trivial to share this content you've got a sea change in this kind of harassment.
Convincingly photoshopping someones face onto a nude body takes time, skills, effort, and access to resources.
Grok lowers the barrier to be less effort than it took for either you or I to write our comments.
It is now a social phenomenon where almost every public image of a woman or girl on the site is modified in this manner. Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
And there is safety in numbers. If one person photoshops a highschool classmate nude, they might find themself on a registry. For lack of knowing the magnitude, if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
Hold on to that spirit and I think you'll genuinely do well in the world that's coming next.
This site.
Mate, thats the point. I as a normal human being, who had never been on 4chan or the darker corners of reddit would have never seen or be able to make frankenporn. much less so make _convincing_ frankenporn.
> For lack of knowing the magnitude
Fuck that shit, if they didn't know the magnitude they wouldn't have spend ages making the photoshop to do it. You don't spend ages doing revenge, "because you didn't know the magnitude" You spend ages doing it because you want revenge
> if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
I mean we put people in prison for drink driving, lots of people do that in the states, same with drug dealing. Same with harassment, thats why restraining orders exist.
but
You are missing the point, Making and distributing CSAM is an illegal offence. Knowingly storing and transmitting it is an offence. Musk could stop it all now by re-training grok, or putting in some basic controls.
If any other person was doing this they would have been threatened with company ending action by now.
We mostly agree, so let me clarify.
Grok is being used to make very much revenge porn, including CSAM revenge porn, and people _are using X because it's the CSAM app_. I think this is all bad. We agree here.
"For lack of knowing the magnitude" is me stating that I do not know the number of people using X to generate CSAM. I don't know if it is a thousand, a million, a hundred million, etc. So, I used the word "myriad" instead of "thousands", "millions", etc.
I am arguing that this is worse because the scale is so much more. I am arguing against the argument equivocating this with photoshop.
> If any other person was doing this they would have been threatened with company ending action by now.
Yes, I agree. X is still available on both app stores. This means CSAM is just being made more and more normal. I think this is very bad.
I brought up Section 230 because it used to be that removal of Section 230 was an active discussion in the US, particularly for Twitter, pre-Elon, but seems to have fallen away.
With content generated by the platform, it certainly seems reasonable to understand how Section 230 applies, if it all, and I in particular think that Section 230 protections should probably be removed for X in particular.
You are correct; I read your earlier post as "did we forget our already established principle"? I admit I'm a bit tilted by X doing this. In my defense, there are people making the "blame the user, not the tool" argument here though, which is the core idea of section 230
I believe he thinks the same applies to Grok or whatever is done on the platform. The fact that "@grok do xyz" makes it instanteous doesn't mean you should do it.
Anyways, super cool that anyone speaking out already has their SSN in his DB.
> X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
Which is moderating/censoring.
The tool (Grok) will not be updated to limit it - that's all. Why? I have no idea, but it seems lately that all these AI tools have more freedom than us humans.
If you want to be an actress and you are 14 years old, you now have to worry about tools that make porn of you.
If you are an ordinary woman that wants to share photos with your friends on instagram, you now have to worry about people making porn of you!
The one above is not my opinion (although I partially agree with it, and now you can downvote this one :D ). To be honest, I don't care at all about X nor about an almost trillionaire.
It was full of bots before, now it's full of "AI agents". It's quite hard sometimes to navigate through that ocean of spam, fake news, etc.
Grok makes it easier, but it's still ugly and annoying to read 90-95% always the same posts.
Weird. Why do people get in trouble for using the word "cis" on twitter?
It’s against the TOS to post a picture of your own boobs for example.
2) X still has an ethical and probably legal obligation to remove these images from their platform, even if they are somehow found not to be responsible for generating them, even though they generated them.
That's interesting - do you have a link for this? I'd be curious to know more of the section's details.
“The information must be "provided by another information content provider", i.e., the defendant must not be the "information content provider" of the harmful information at issue”
If grok is generating these images, I am interpreting this as Twitter could be becoming an information content provider. I couldn’t find any relevant rulings but I doubt any exist since services like Grok are relatively new.
The very first AI code generators had this issue that user could make illegal content by making specific requests. A lot of people, me including, saw this as a problem, and there were a few copyright lawsuits arguing this. The courts however did not seem to be very sympathetic to this argument, putting the blame on the user rather than the platform.
Here is hoping that Grok forces regulations to decide on this subject once and for all.
Also, this always existed in one form or another. Draw, photoshop, imagine, discuss imaginary intercourse with popular person online or irl
It's not worthy of intervention because it will happen anyway and it doesn't fundamentally change much
It's a fictional creation. Nobody is "taking her clothes off", a bot is fabricating a naked woman and tacking her likeness (ie. face) on to it. If anything, I could see how this could benefit women as they can now start to reasonably claim that any actual leaked nudes are instead worthless AI slop.
I don't think I would care if someone did this to me. Put "me" in the most depraved crap you can think of, I don't care. It's not me. I suspect most men feel similarly.
What's the big deal?
A woman being damaged by nudes is basically a white knight, misogynist viewpoint that proclaims a woman's value is in her chastity / modesty so by posting a manufactured nude of her you have thereby degraded her value and owe her damages.
It feels odd for them to be advertising this belief though. These are surely a lot of the same people trying to devalue virginity, glorifying public sex positivity, condemning "slut shaming", etc.
Shall we ban prediction markets?
It's not a personal tool that the company has no control over. It's a service they are actively providing and administering.
But when it’s used to create CSAM, then it’s suddenly not just a tool.
You _cannot_ stop these tools from generating this kind of stuff. Prompt guards only get you so far. Self-hosted versions don’t have them. The human writing the prompt is at fault. Just like it’s not Adobe’s responsibility if some sick idiot puts bikinis on a child in Photoshop.
[0] > Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.
Yes, but combined with "omg AI" (which happened elsewhere; for instance, see the hype over OpenAI Sora, which is clearly useless except as a toy), so extra-hype-y.
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
Also I think a lot of people simply think models which are published openly shouldn't be held to the same legal standards as proprietary models.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
I can argue for access to say photoshop like tools, and say folks shouldn't post revenge / fake porn ...
If you are providing a tool for people, YES you are responsible to some degree.
Think of it this way. I sell racecars. I'm not responsible if someone buys my racecar and then drinks and drives and dies. Now, I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someon dies. Now I am responsible.
In what way?
But I think, most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range then yeah, I think there's an expectation you make it safe. You put up walls, you have security, etc. You try your best to migrate the bad shit.
[1]: https://www.congress.gov/bill/119th-congress/senate-bill/146
I'd also argue commercialization affects this - X is marketing this as a product and making money off subscriptions, whereas I generally think of an open model as something you run locally for free. There's a big difference between "Porn Producer" and "Photoshop"
I think it is good that you can install any apk on an android device. I also think it is good that the primary installation mechanism that most people use has systems to try to prevent malware from getting installed.
This sort of approach means that people who really need unbounded access and are willing to go through some extra friction can access these things. It makes it impossible for a megacorp to have complete control over a computing ecosystem. But it also reduces abuse since most people prefer to use the low-friction approach.
They are able to change how Grok is prompted to deny certain inputs, or to say certain things. They decided to do so to praise Musk and Hitler. That was intentional.
They decided not to do so to prevent it from generating CSAM. X offering CSAM is intentional.
Yes, under the TOS, what grok is doing is not the "fault" of grok(the reason is the causal factor of the post[enabled by 2 humans: the poster and the prompter]; the human intent is what initiates the generated post, not the bot; just like a gun is shot by a human, not by the strong winds). You could argue it's the fault of the "prompter", but we're going to circle back to the cat & mouse censorship issue. And no, I don't want a less censored grok version that's unable to "bikini a NAS"(which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is.(Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy generations)
If X would implement what the so-called "moralists" want, it will just turn into Facebook.
And for the "protect the children" folks, it's really disappointing how we're always coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND the user who doesn't want to see grok replies(in case the posts don't get the NSFW tag in time).
Ironically, a decent amount of people who want to censor grok are bluesky users, where "lolicorn" and similar dubious degenerate content is being posted non-stop AS HUMAN-MADE content. Or what, just because it's an AI it's suddenly a problem? The fact that you can "strip" someone by tweeting a bot?
And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it actually happens for ANY industry), then it's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it created 1M impressions and drives engagement. You will not convince Musk(or any person who makes such a decision) to stop grok from "stripping you", because the alternative is that other non-grok/xAI/etc. entities/people will make the content, drive the engagement, make the money.
The fact of the matter is they do have a policy and they have removed it, suspended accounts and perhaps even taken it further. As would be the case on other platforms.
As far as I understand there is no nudity generated by grok.
Should public gpt models be prevented from generating detestable things, yes I can see the case for that.
I won't argue there is a line between acceptable and unacceptable, but please remember people perv over less (Rule 34). Are bikinis now taboo attire? What next, ankles, elbows, the entire human body?(Just like the Taliban). (Edit: I'm mentioning this paragraph for my below point.)
GPT's are not clever enough to make the distinction by the way either, so there's an unrealistic technical challenge here.
I suspect the this saga blowing out of proportion is purely "eLoN BAd".
> As far as I understand there is no nudity generated by grok.
There is nudity, and more importantly there is CSAM material being generated. reference: https://www.reddit.com/r/grok/comments/1pijcgq/unlocking_gro...
> Are bikinis now taboo attire?
generating sexualised pictures of kids is verboten. Thats epstien level of illegality. There is no legitiamte need for the public to hold, make or transmit sexualised images of children.
Anyone arguing otherwise has a lot of questions to answer
That is a different grok to the one publishing images and discussed in the article. Your link clearly states they are being moderated in the comments and all comments are discussing adults only. The links comments also imply that these folks are jailbreaking nearly, because of guardrails that exist too.
As I say read what I said, please don't put words in my mouth. The GPT models wouldn't know what is sexualised. I said there is a line at some point. Non-sexualized bikinis are sold everywhere, do you not use the internet to buy clothes?
Your immediate dismissive reaction indicates you are not giving what I'm saying any thought. This is what puritanical thought often looks like. The discourse is so poisoned people can't stop, look at the facts and think rationally.
I don't think there is much emotion in said post. I am making specific assertions.
to your point:
> Non-sexualized bikinis are sold everywhere
Correct! the key logical modifier is Non sexual. Also you'll note that a lot of clothing companies do not show images of children in swimwear. Partly that's down to what I imagine you would term puritanism, but also legal counsel. The definition of a CSAM is loose enough (in some jurisdictions) to cover swimwear, depending on context. That context is challenging. A parent looking for clothes that will fit/suit their child is clearly not sexualised (corner cases exist, as I said context) Someone else who is using if for sexual purposes is not.
and because like GPL3 CSAM is infectious, the tariff for both company and end user is rather high for making, storing, transmitting and downloading those images. If someone is convicted of collecting those images, and using them for a sexual purpose, then images that were created that were not-CSAM suddenly become CSAM, and legally toxic to posses. (context does come in here.)
> Your link clearly states they are being moderated in the comments
Which tells us that there is a lot of work on guardrails right? its a choice by xai to allow users to do this. (mainly the app is hamstrung so that you have to pay for the spicy mode.) Whether its done by an ML model or not is irrelevant. Knowingly allowing CSAM generation and transmission is illegal. if you or I were to host an ML model that allows user to do the same thing, we would be in jail. There is a reason why other companies are not doing this.
The law must be applied equally, regardless of wealth, or power. I think that is my main objection to all of this. its clearly CSAM, and anyone other than musk doing this would have been censured by now. All of this justification is because of who it is doing this, rather than what is being done. We can bike shed all we want about is it actually really CSAM, which negates the entire point of this, which is its clearly breaking the law.
> The GPT models wouldn't know what is sexualised.
ML Classification is really rather good now. Instagram's unsupervised categorisation model is really rather effective at working out context of an image or video (ie differentiation of clothes, and context of those clothes.)
> please don't put words in my mouth
I have not done this, I am asserting that the bar for justifying this kind of content, which is clearly illegal and easily prevented (ie a picture of a minor and "generate an image of her in sexy clothes") is very high.
Now you could argue that I'm implying that you have something to hide. I am actually curious as to your motives for justifying the knowing creation of sexualised images of minors. You've made a weak argument that there are legitimate purposes. You then argue that its a slippery slope.
Is your fear that this brings justifies an age gated internet? censorship? What is the price that you think is worth paying?
I said I don't understand the fuss because there are guardrails, action taken and technical limitations.
THAT is my motive. The end of story. I do not need to parrot outrage because everyone else is, "you're either with us or against us" bullshit. I'm here for a rational discussion.
Again read what I've said. Technical limitations. You wrote that long ass explanation interspersed with ambiguities like consulting lawyers in borderline cases and then you expect an LLM to handle this.
Yes ML classification is good now but not foolproof. Hence we go back to the first point, processes to deal with this when x's existing guardrails fail, as x.com has done, delete, suspend, report.
My fear (only because you mention it, I didn't mention it above, I only said I don't get the fuss above) it seems should be that people are losing touch in this grok thing, their arguments are no longer grounded in truth or rational thought, almost a rabid witch hunt.
"Hey grok make a sexy version of [obvious minor]" is not something that is hard to stop. try doing that query with meta, gemini, or sora, they manage it reliably well.
There are not technical impediments to stopping this, its a choice.
I'd bet if you put that prompt into grok it'd be blocked judging by that Reddit link you sent. These folks are jailbreaking nearly asking to modify using neutral terms like clothing and images that grok doesn't have the skill to judge.
Every feature is lawyered up. Thats what general counsel does. Every feature I worked on at a FAANG had some level of legal compliance gate on it, because mistakes are costly.
For the team that launched the chatbots, loads of time went into figuring out what stupid shit users could make it do, and blocking it. Its not like all of that effort stopped. When people started finding new ways to do naughty stuff, that had to be blocked as well. Because other wise the whole feature had to be pulled to stop advertisers from fleeing, or worse FCC/class action.
> These folks are jailbreaking nearly asking to modify using neutral terms like clothing
CORRECT! people are putting effort into jailbreaking the app. where as on x grok they don't need to do any of that. Which is my point, its a product choice.
None of this is "hard legal problems" or in fact unpredictable. They are/have done a ton of work to stop that (again mainly because they want people to pay for "spicy mode")
You can't have AI-generated CSAM, as you're not sexually abusing anyone if it's AI-generated. It's better to have AI-generated CP instead of real CSAM because no child would be physically harmed. No one is lying that the photos are real, either.
And it's not like you can't generate these pics on free local models, anyway. In this case I don't see an issue with Twitter that should involve lawyers, even though Twitter is pure garbage otherwise.
As to whether Twitter should use moderation or not, it's up to them. I wouldn't use a forum where there are irrelevant spam posts.
Seems like a toy feature.
This is actually separate to hn's politics-aversion, though I suspect there's a lot of crossover. Any post which criticised Musk has tended to get rapidly flagged for at least the last decade.
jshier•1d ago