Again, I’m not passing judgment on content moderation itself, but this is an extremely weak initiative.
Someone reports something under Special Pleading X, and you (the operator) have to take the thing down ~instantly, by law. There is never an equally efficient mechanism to push back against abuses -- there can't be, because pushing back exposes the operator to legal risk. So you effectively have a one-sided mechanism for removing unwanted content.
Maybe this is fine for "revenge porn", but even ignoring the slippery-slope argument (which is real -- we already have these kinds of rules for copyrighted content!), it's not so easy to cleanly define "revenge porn".
The next step would be for the government to demand direct access to these tools. Then the government would be able to carry out a holocaust against any ethnic group, only ten times more effectively and inexorably than Hitler did.
As far as I understand it, it precisely mandates monitoring EVERYONE.
They are not talking about removing a specific image from the platform based on its hash or something. They are talking about actions that involve automated analysis of all content on the platform for patterns arbitrarily specified by the government.
The technologies under discussion differ from totalitarian surveillance by a single flag toggled on the platform, and from the user's perspective they are indistinguishable from such surveillance.
But most people understand the word "surveillance" to mean more involved information collection than just deleting content that matches certain criteria.
In contrast, the proposed mechanism must be able to classify all content on the platform, at any arbitrary moment in time, according to an arbitrary filter specified after the fact. That is literally a mechanism of totalitarian surveillance.
You've got this the wrong way around. These are social media sites.
People are publicly publishing revenge porn, and the government has told sites that if they are requested to take down revenge porn then they have to.
They don't have to monitor, because they are being told of its existence.
CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas whether an image was shared nonconsensually seems like it'd often require context that is not in the image itself, possibly contacting the parties involved.
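To make the detection side concrete, here is a toy sketch of the "hashes" approach: a perceptual average hash plus Hamming-distance matching, the family of techniques behind systems like PhotoDNA. Everything here (the 8x8 grayscale input, the threshold of 5) is a simplifying assumption for illustration; real systems use far more robust transforms. Note what it can and cannot do: it answers "is this a known image?", never "was it shared with consent?"

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255) -> 64-bit int.
    Each bit records whether that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_blocklist(img_hash, blocklist, threshold=5):
    # A small Hamming tolerance survives re-encoding noise, but it is
    # exactly what produces the false positives mentioned above.
    return any(hamming(img_hash, h) <= threshold for h in blocklist)
```

A near-duplicate of a blocklisted image (one altered pixel, a slight recompression) lands within the threshold and is flagged; an unrelated image does not. Neither this nor an ML classifier supplies the consent context the comment above is pointing at.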
Everything can be detected "with some false positives". If you're happy with "with some false positives", why would you need any context?
The key phrase is "non-consensual intimate image" commonly known as "revenge porn". It seems this includes fakes as well.
Edit: full text of draft legislation https://www.gov.uk/government/collections/crime-and-policing... ; still very much in the process of being amended.
I note that publishing NCII is already an offence in Scotland, although it doesn't have this kind of liability for platforms. Primarily used against ex-partners publishing real or fake revenge images.
Bare in mind, this would have been used to stop the Epstein images of the former Prince Andrew from being viewed [1].
> Platforms that do not do so would potentially face fines of 10 percent of "qualifying worldwide income" or have their services blocked in the UK.
Why on earth would it be 10% of their worldwide income and not their UK-based income? These politicians really think they have more power than they do.
> The amendment follows outrage over the Elon Musk-owned chatbot Grok's willingness to generate nude or sexualized images of people, mainly women and girls, which forced a climbdown earlier this year.
The AI didn't just randomly generate NSFW content, it did it at the request of the user. Remember, there was no interest in removing the CP content from Twitter prior to Musk buying it, and then they all moved to Mastodon / BlueSky where they now share that content.
> The government said: "Plans are currently being considered by Ofcom for these kinds of images to be treated with the same severity as child sexual abuse and terrorism content, digitally marking them so that any time someone tries to repost them, they will be automatically taken down."
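The "digitally marking" flow described in that quote can be sketched as a shared fingerprint blocklist checked on every upload. This is a hypothetical illustration, not Ofcom's actual design: it uses exact SHA-256 hashes, which any re-encoding or one-byte change defeats, which is precisely why real deployments pair it with perceptual hashing.

```python
import hashlib

# Hypothetical sketch: once an image is confirmed as NCII, its
# fingerprint is "marked" into a shared blocklist, and every
# subsequent upload is checked against that list.
blocklist: set[str] = set()

def mark_as_ncii(image_bytes: bytes) -> None:
    """Record a confirmed image's fingerprint so reposts are refused."""
    blocklist.add(hashlib.sha256(image_bytes).hexdigest())

def upload_allowed(image_bytes: bytes) -> bool:
    """Exact-match check; trivially evaded by altering a single byte."""
    return hashlib.sha256(image_bytes).hexdigest() not in blocklist
```

An exact repost of a marked file is blocked, while the same image re-saved at a different quality sails through -- the gap the perceptual techniques above exist to close.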
Ofcom simply doesn't have this kind of power. 4chan are showing as much [2]. This is simply massive overreach by the UK government, and I would advise tech giants to stop complying.
Regardless of whether the actions of the British state are correct, I do not think it is a good position that a foreign tech company is more powerful than the British state.
This is basically the same argument for the US forced divestiture of TikTok.
Britain is a weak state. There will always be foreign companies more powerful than it is. The only way to change that would be for the British state to become extremely powerful.
Does it bother you if a British company is more powerful than the state of Turkmenistan?
If this is the end goal, then they should do what China does: make back doors mandatory on all devices and ban any sensitive foreign platforms at the network level. If anyone is using VPNs, Tor, or whatever, the UK police can flag those individuals and investigate what they are doing. At minimum, they can drive ad revenue for Google, X, Meta, etc. close to $0 in the UK, which would disincentivize those platforms from having users there.
There is also a future here where the UK will not be able to monitor or see what its users are doing. SpaceX is already undermining foreign sovereignty with Starlink usage in Iran. If the UK or the rest of the EU fails to really crack down at the scale China did, they may completely lose control of what is distributed within their borders. A combination of satellites and mesh networks could be much harder to monitor than the current telecom infrastructure.
The current approach is going to get the UK pressured at the nation state level by the US. In that case the UK isn't answering to some foreign tech company but whatever party is in power in the US at the time.
This seems like an extreme overreaction; why can't the US platforms just stop profiting from revenge porn?
(Bear, although your typo is awkwardly relevant...)
Would redacted images, and those that do not identify the victim, actually count?
> Why on earth would it be 10% of their worldwide income and not their UK-based income? These politicians really think they have more power than they do.
I mean, when it comes down to a fine or blocking access altogether, surely they can ask for whatever they want? They could've made it "one bajillion dollars" if they wanted. Actually collecting the fine is a whole other matter.
> Mastodon / BlueSky where they now share that content.
I regularly check Bluesky and occasionally check Mastodon, and I've never seen even 'tame' porn on either. I have absolutely seen porn on X, though.
Laws are reactive. When abuses of the system happen, lawmakers need to find ways to minimize the damage. This is one of the reasons Google used to follow the "don't be evil" doctrine: it was a smart way to minimize regulation. The new big tech has thrown any appearance of morality out the window, and that creates a strong need to regulate their actions.
You’d have to essentially police all VPN use beyond China levels to get the worst offenders of this.
(keyword appears to be Recognition Acts: https://www.uniformlaws.org/committees/community-home?Commun... )
Can we also get a legal definition of "social media"? Is that really just as simple as "services which allow multi-directional communication"? Hate to break it to them, but the internet proper is, itself, a service which allows multi-directional communication. No matter how many walled gardens are created, the 1s and 0s will continue to flow unimpeded.
Though I wonder whether existing frameworks around slander and libel could be adapted to address the brave new world of AI-augmented abuse.
Fictional content is also covered by this law. How do we determine what fictional content counts as an intimate image of a real person? What if the creator of an AI image adds a birthmark that the real life subject doesn't have, is that sufficient differentiation to no longer count as an intimate image of a real person? What if they change the subject's eye color, too?
[1] https://www.theguardian.com/society/2026/feb/18/tech-firms-m...