Given that the UK is trying to block 4chan via the scenic route, this, in my opinion, just gives the government an incentive to block any website in the future and throw it back in the faces of the population.
"We're blocking $WEBSITE for $THINK_OF_THE_CHILDREN. You don't get to complain because you supported blocking Twitter, so..."
That said, I honestly believe their hatred of Elon Musk is fuelling this, rather than Grok itself - which very obviously needs to be fixed.
It is not being misused. It is being used as intended.
Though extra bonus irony points for some of the civil war comments being related to how Musk thinks the UK government is letting in migrants who want to do exactly the sort of thing Musk's own image generator is creating pictures of.
labrador•17h ago
This is a very interesting AI problem. Grok was trained on X content, which contains a lot of porn. Other image generation models aren't trained on porn, so they don't know how to produce it. It appears to be very difficult to stop Grok from making porn since it has been trained on it; there's always a prompt that can produce it. Is the only workable solution to train a new version, Grok 5, without porn?
watwut•17h ago
It's not like X has a track record of caring about this sort of thing.
watwut•16h ago
However, nonconsensual porn IS the desired outcome, by all we know. Nonconsensual porn that other countries can't stop because Musk is powerful enough is an even better outcome for Musk.
bdbdjfjrbrbf•16h ago
Not true. Google’s models are excellent at making adult content; they just have aggressive pre- and post-filtering. But it’s not perfect, and glimpses of its dirty mind slip through the cracks.
(I’m not sure about OpenAI, its filtering is much more aggressive so it’s harder to probe. I’ve seen it make sexualized content, but I haven’t seen anything that it would necessarily have learned from porn.)
Grok’s lack of anything resembling effective filtering is an intentional product choice, not a training data limitation. Not surprising, coming from “pedo guy” with a breeding fetish and an obsession with catgirls. What horrors we might find if we searched his drives…
labrador•15h ago
Google invests in the safety of its training data from the outset. This involves efforts to filter out problematic content, such as violent, offensive, or sexually explicit material, before or during the model training phase. The aim is to ensure the models are trained on appropriate data consistent with Google's AI Principles and policies. The company has a zero-tolerance policy for illegal content, such as child sexual abuse material (CSAM), and works to ensure such material is not included in the datasets.