If you build a system explicitly designed to have no content boundaries, and it produces CSAM, that's not a failure of execution - that's the system working as designed. You don't get credit for noble intentions when the outcome was entirely foreseeable.
Deciding to place no limits on what an AI will generate is itself a value judgment. It's choosing to enable every possible use, including the worst ones. That's not principled neutrality; it's moral abdication dressed up as libertarianism.
When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.
Now, it's considered noble to scrape all the world's data as fast as possible, without permission and without any thought as to the legality of the material, then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, all while destroying our ability to trust that an image was created by a human (an ability we have possessed for hundreds of thousands of years, from cave painting to creative coding, and which has now been permanently and irrevocably lost).
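To be concrete about the etiquette mentioned above, here is a minimal sketch of what a "polite" crawler looked like; the one-second delay and the "polite-bot" user agent are illustrative values I've made up, not any standard:

    # Illustrative polite crawler: check robots.txt and pause between
    # requests so as not to hammer the server.
    import time
    import urllib.request
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
    rp.read()

    for url in ["https://example.com/a", "https://example.com/b"]:
        if not rp.can_fetch("polite-bot", url):
            continue  # honour the site's crawling rules
        with urllib.request.urlopen(url) as resp:
            page = resp.read()
        time.sleep(1.0)  # the "don't be rude" delay (example value)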
Are those goals noble? This is the same guy who also said "with AI we are summoning the demon", and whose self-justification for a trillion-dollar Tesla bonus deal involved the phrase "robot army"?
CaaS
Fortunately this narrative did not gain traction.
The knife maker will be in hot water if you ask them for a knife, are very specific about how you'll break the law with it, and they just hand it over and do nothing else about it (the equivalent of a prompt telling the LLM exactly what illegal thing you want).
Even more so if the knife they made is itself illegal, like an automatic knife or a dagger (the equivalent of the model containing the information necessary to create CSAM).
I'd assume they were just blindsided by the response; they're likely in real danger of getting either DNS-delisted or outright banned in several jurisdictions.
It's just been restricted to paying customers, and that decision could be driven as much by cost as by outrage.
Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.
Bloody hell, what the hell is wrong with people?
Edit: I'm not talking about deepfakes of real people
2) Inability to differentiate between real and fake - less theoretical.
Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.
This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be... more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.
> This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be... more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.
I don't think he's talking about "media ecosystems" but rather enforcement. If fake CSAM proliferates (that can be confused for the real thing), it would create problems for investigating real cases. Investigative resources would be wasted investigating fake CSAM, and people may not report real CSAM because they falsely assume it's fake.
So it probably makes sense to make fake CSAM illegal, not because of the direct harm it does, but for the same reasons it's illegal to lie to a police officer or file a false crime report.
EDIT: OP seems to have clarified in another reply that they are talking about fully computer-generated images, but that's out of context; the outcry here is that Grok generated fake images of actual people to start with!
Huh? People have been editing images of real women to depict them in bikinis (and that's the least offensive).
Leviticus 20:14 (KJV)
“And if a man take a wife and her mother, it is wickedness: they shall be burnt with fire, both he and they; that there be no wickedness among you.”
Leviticus 21:9 (KJV)
“And the daughter of any priest, if she profane herself by playing the whore, she profaneth her father: she shall be burnt with fire.”
Considering how many people want to murder people (or justify murdering people) simply for being attracted to children, are you categorically sure about that?
The only reason I personally can't condone photorealistic AI images is that they're indistinguishable from photographs.
And in this case, it required the exploitation of another human being (their photograph, or a photograph of them) in order to undress/manipulate the image.
There is also a greater debate about giving people with harmful deviances a “victimless” outlet versus normalising their deviance. Sure there are no children harmed in making the images, but what if they generate images of child actors, or normalise conversation about child sexual abuse?
Publishing is just the first part. The other part is reactions. Most platforms let you disable them so your viewers don't see the disgusting things people say about you right there alongside your content. Yet many people who publish their own content decide to leave them on, because the vile comments actually help them exploit their viewers. There is value in being told to kill yourself in a comment on your post.
If the capability to let users generate fake porn in the comments were left to creators, many of them would leave that option on, for the same reason they leave the comments on. Ben Shapiro could benefit a lot if some terrible person commented on his video with a deepfake of him sucking someone off: both because of the outrage and because his viewer base is more homophobic, and homophobia correlates with homosexual arousal.
At this stage it might just be easier to seek out Satan directly, you'll probably get a better deal for your soul.
Have an interesting stay.
People are being hurt by this, because "just pixels on a screen and bytes on a disk" can constitute harm due to the social function that information serves.
It's like calling insults hurled at someone "just words" because no physical violence has occurred yet. The words themselves absolutely can be harm because of the effect they have, and they also create an environment that leads to further, physical violence. Anyone who has experienced even mild bullying can attest to that.
Furthermore, women and girls are often subject to online harassment and humiliation. This is of course part of that: we aren't talking about fictional images here, we are talking about photos of real people, many of whom are children, being manipulated to shame, humiliate, and sexually harass them, targeted overwhelmingly at women and girls.
Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.
> we aren't talking about fictional images here, we are talking about photos of real people, many of whom are children,
is not compatible with what GP actually said
> it's clearly wrong when it concerns actual children
> No living creature was actually hurt when producing these images and cannot be hurt by them.
making these overly dramatic character attacks seem mostly silly
> you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.
> Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.
And everyone clapped.
GP edited their post to add that after everyone pointed out that that's what the entire thread is actually about and GP realised how disgusting they looked. Keep up.
Everyone knows photos can be easily faked. The alarmist response serves no purpose. AI itself can be tasked to make and publish fake photos. Will you point pitchforks at the generators of the generators of the generators?
Fake content has a momentary fizz followed by a sharp drop-off and demotion to yesterday's sloppy meme. Fading to nothing more than a cartoon you don't like. Let's not, I hope, go after "cartoons" or their publishers.
However, I think it's clear that Pandora's box is now wide open and that you cannot close it. Sure, you can turn off that Grok integration, but the AI image generation capabilities are now widely available for basically anyone to use.
I wonder whether it'd be better to just "accept and live with it". I agree that this can cause a lot of harm, but I don't see a way this can be outlawed and prosecuted such that there's a net benefit for society. In the EU, many have been battling proposals like Chat Control for years now, not because they want to protect sex offenders, but because backdooring society's privacy on a grand scale is likely far more detrimental than the impact of sex offenders. (And here we aren't even talking about "real" CSAM content.)
I don't think a world where every female public figure gets nonconsensual porn of themselves shared _publicly_ is better.
Private is a separate matter, but only if it stays truly private.
Big tech's approach of move fast, break things, and gain a sh**load of money and influence has cost the world so much over the past two decades. So much so that the post-WWII rules-based international order is under threat. We're on the verge of sliding back towards a world where might makes right and the powerful get to kill, beat, steal from, and sexually abuse whomever they want, whenever they want. Worse, with the help of technology, they get to entertain the masses by turning those horrific acts into social media content.
It's largely due to the acts of big tech that we got into this mess. But instead of learning from this biggest mistake of our generation and taking proactive steps to prevent further harm, you propose that we all suck it up and accept whatever our tech billionaire overlords want to inflict on this world next? WTAF.
> proposals like Chat Control
Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?
I feel like we are already past the point where influential people have to play by the same rules as everyone else. I dislike this as much as 99% of the population, but I don't realistically see a remedy given how our governments (EU) are operating.
> Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?
I am not comparing these two things; I mentioned Chat Control because tackling CSAM is one of its main selling points. These forging tools are out in the open, and banning access to and use of them is practically impossible. You could force platforms into setting up filters for public content, but this won't stop the nudes from being shared privately and likely still being accessible on the web _somewhere_. Just look at the BS on TikTok that's been publicly accessible and growing for years now...
IMHO public content filtering won't help much and the following steps will likely involve tapping into people's private content and messages. And this is where I draw the line.
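For concreteness, "filters for public content" in this space usually means matching uploads against hash databases of known material, roughly as sketched below. The SHA-256 hash and the empty known_hashes set are illustrative stand-ins; real deployments use perceptual hashes (e.g. PhotoDNA, PDQ) that survive resizing and re-encoding.

    # Rough shape of a public-content filter: block uploads whose hash
    # appears in a database of known illegal material. Illustrative only;
    # real systems use perceptual hashes, not cryptographic ones.
    import hashlib

    known_hashes: set[str] = set()  # in practice supplied by e.g. NCMEC

    def should_block(upload: bytes) -> bool:
        digest = hashlib.sha256(upload).hexdigest()
        return digest in known_hashes

Which also illustrates the limitation: hash matching only catches known, already-catalogued content, so it does nothing against freshly generated images.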
The level of discourse on twitter is pretty terrible already but at that point what’s the platform even selling… and who is the platform for?
Just from a product standpoint you got problems.
I don't think we can avoid a world where people can easily generate CSAM, so we have to separate the discussion of doing that privately from Grok being able to do it.
It makes sense to me that we don't want widely used websites to contain CSAM that you can't easily avoid; it's simply repulsive to almost everyone, and that's almost certainly a human instinct. I don't think it needs to be much more complicated than that.
In terms of generating CSAM privately, or even sharing it with other people, I think this is a much more interesting discussion. At this point it is an open question whether it is harmful. Could it replace the abuse that goes into creating some of the real content? Does the escalation argument hold water: will people be more likely to sexually assault children due to access to this material? I don't think we know enough about pedophilia to answer these questions, but given that there is no way to stop this content from being generated in 2026, we really need to answer them before we decide to simply incarcerate everyone doing it.
So with that in mind, why should we assume that the unrestricted proliferation of fake child porn would not produce significant harm downstream to society? Is the assumption that unlimited fake child porn would satiate the kind of people who now seek out real child porn?
I would strongly dispute that, as the energy required by and the pollution produced by AI systems do cause a lot of harm to living creatures and environments.
Is it because it was pre-trained with real images? (which would be highly illegal and immoral, but I wonder if Twitter has a data-curation team somewhere)
Maybe some kind of distillation technique, such as "generate normal porn -> decrease body size -> generate a childish face and replace the original image's face with it"? That would prove an intent to explicitly allow this kind of generation.
Is it an emergent behaviour?
Why are there no better safeguards?
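As for what "better safeguards" could mean in practice, the common pattern is an output-side moderation gate, sketched below. classify_image() and the 0.5 threshold are hypothetical placeholders, not any vendor's actual API.

    # Hypothetical output-side safeguard: score every generated image with
    # a safety classifier and refuse to return anything over a threshold.
    from dataclasses import dataclass
    from typing import Callable, Optional

    RISK_THRESHOLD = 0.5  # illustrative value

    @dataclass
    class SafetyScore:
        csam: float           # estimated probability of CSAM
        nonconsensual: float  # estimated probability of NCII

    def classify_image(image: bytes) -> SafetyScore:
        # Placeholder: a real system runs a trained safety model here.
        return SafetyScore(csam=0.0, nonconsensual=0.0)

    def moderated_generate(generate: Callable[[str], bytes],
                           prompt: str) -> Optional[bytes]:
        image = generate(prompt)  # the underlying image model
        score = classify_image(image)
        if max(score.csam, score.nonconsensual) >= RISK_THRESHOLD:
            return None  # block the output instead of returning it
        return image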
A good analogy is that human artists also often train by painting or drawing a nude model.
Then you are not talking about the article, so what was the point of your comment?
Facebook’s practices would have gotten any other dev banned from all stores long ago.
Meanwhile any other devs are under a different microscope / standard.
Reddit as well
"Random Braveheart quote"
Unless you use the grok app...
"Random Matrix quote"
I'll take those downvotes and see myself out.
Non-consensual intimate imagery harms real people.
CSAM normalizes and facilitates abuse of real children.
Grok, and everyone involved in it or similar endeavours, facilitate abuse.
In a way, leaving it open as a honeypot is the best action.
https://bsky.app/profile/caseynewton.bsky.social/post/3mbwqh...