Oh wait...self-harm just includes suicide. I guess it is still fine to convince people to destroy their lives as long as it has a revenue curve behind it.
It is a long con, but the expected value of that self-harm is still positive for the business, and it is legally protected speech in the UK.
You can't tell someone to kill themselves, but you can show alcohol ads to an alcoholic or encourage someone to modify their staircase to increase the risk of a fatal fall.
My point is that this law uses a stupidly narrow definition of self-harm.
Anyhoo, the UK has been weird lately, but as with most things, a new equilibrium will be reached.
To see how silly it is, just turn it around: if they allow self-harm videos, before you know it they'll be mandating that ALL videos be about self-harm. Seem reasonable?
A soldier killed himself with a rifle.
A local newspaper was asked to remove the page because "it contains information harmful to children", namely guides on how to kill yourself. Because there is such a law. They complied.
The slippery slope argument is warranted because it's true: the law was about pornography, then it was extended to include non-pornographic content, and we can be sure more stuff will be banned in the future, as always happens with these laws.
The typical algorithmic implementation would ban this HN discussion itself, for containing the string "self-harm" and various other keywords on the page. That's often how it ends up, for anyone who's been paying attention. Legitimate support websites are censored for discussing the very subjects they attempt to support. Public health experts get misclassified as disinformation: they use the same set of keywords, just in a different order. Inexpensive ML can't tell them apart.
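To make the failure mode concrete, here is a toy sketch (hypothetical blocklist and function name, not any platform's actual code) of the kind of substring matching described above; a crisis-support page and an abusive post trip the same rule:

    # Toy keyword filter: flags any text containing a blocklisted phrase
    BLOCKLIST = {"self-harm", "suicide", "kill yourself"}

    def is_flagged(text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in BLOCKLIST)

    # A support page and an abusive post are indistinguishable to this rule:
    print(is_flagged("If you're thinking about suicide, call the Samaritans on 116 123."))  # True
    print(is_flagged("Nobody would miss you, just kill yourself."))                         # True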
Important news could be automatically suppressed before anyone would even realize it's authentic news. How could one discuss on an algorithmically "safe" platform, for instance, the allegation that the President of the United States paid the pedophile Jeffrey Epstein to rape an underage child, who later killed herself? That's four or five "high-priority" safety filters right there.
Look at how Microsoft GitHub consistently deletes innocent projects without warning, like SymPy, over LLM hallucinations. That moderation style's a direct consequence of the large financial costs of copyright lawsuits. If you introduce similar financial/legal risks in other areas, you should expect similar results.
very much so, large companies are not really held accountable for their users' actions, and that's pretty much by design, for example:
If you hold a party every Saturday where the people who come along abuse residents in the streets and cause general damage, then after the 5th or 6th time, at about the point where the patrons of your parties are prosecuted, you will face legal penalties for letting it happen again, even if it's different people at your party (under a whole bunch of laws, ranging from ASBOs to breach of the peace to criminal damage to anti-rave laws, all sorts).
If you do that online, as long as you comply with the bare minimum (i.e. handing over logs), you're free from most legal pain (unless it's CSAM, copyrighted, or "terrorist" material).
I get your point, but that's where we get into a problem not of principle but of execution. As it's OFCOM who are doing the implementing, and they really don't have the expertise or leadership to produce "good" guidance, we're going to end up with shit.
Poorly thought-out, asymmetric incentives: it strongly disincentivizes helping people not die, while failing to disincentivize creating an environment with easy access to hard drugs.
That's the problem with laws like these: obviously a company is going to err on the side of over-censoring if the cost of making a mistake is unknown. If you're moderating at scale, the law of large numbers being what it is, eventually something will slip through if your platform is large enough. And no one knows what the punishment will be or where the line even is.
Does this include smoking, excessive eating, eating sweets, etc? What about listening to sad music?
> This government is determined to keep people safe online. Vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country.
"Vile" is very emotionally charged, who decides it? Will it be the next government that gets to decide it? Bare in mind, the (recently) ex-Deputy Prime Minister of the current government called her opposition "scum" [1], an extremely negative word.
Or "voting against your own interests"? I can't deny that some voting patterns amount to self harm.
And why stop at voting? The government could also be responsible for preventing 'harmful' thoughts. Police in the UK are regularly deployed for "non-crime hate incidents", so that they can tell people that they haven't committed a crime, but that a note of it will be made in a secret database that affects their employability.
The quote 'This government is determined to keep people safe online' is a 'we're good people' statement for the media - remember, this is a press release.
Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.
Definitions slip over time. Violence, abuse, sexual assault, etc., were previously all physical acts. Then they became mental acts, and now the mere perception of them is treated as devastating.
> Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.
In the moment. But a future politician can reasonably interpret the same idea differently.
Will this result in more spread of ridiculous euphemisms like unalive? Probably. Will this result in us being able to get people banned from social media for telling other people to kill themselves? Probably only with very intermittent enforcement.
Wars and death will end simply because people won't be able to discuss them.
Advertisers don't want their ads to appear next to certain keywords, platforms use content detection to match those criteria, so creators are monetarily incentivized to avoid them. Capitalism!
- Cute winter boots (ICE officers)
- Gardening (420)
- pdf/pdf file (pedophile)
- :watermelon-emoji: (Palestine)
- neurospicy (neurodivergent)
- bird website (X)
- LA Music festival tour (protests)
Not sure if I see more of the 'algospeak' because the problem is real, because I've interacted with algospeak content before and the algorithm is just giving me more of it, or because creators don't really need to do it anymore but still do.
Another common element of both TikTok and Instagram is how some posters try to advertise for OnlyFans without triggering a FOSTA/SESTA-related ban.
People pretend that speech is weightless and has no consequences, even when that's shown not to be entirely true.
Today it's 4chan, Kiwi Farms, WPD, pirate sports sites, libgen, Anna's place... Tomorrow it'll surely be every forum where moderation isn't absolutely draconian.
I wonder if they're going to try to ban Twitter.
See https://sdelano.media/suicideisbad/ (in Russian)
Those who are a net loss to the system we should enable to leave, and indeed the UK is on its own way towards Canadian-style euthanasia.
It's often used as a way to anonymously imply that you should kill yourself, so I wonder if this sort of thing would affect that and how.
Though, overall, I think the censoring of self-harm stuff is already beyond ridiculous. Terms like "self-unalive", "self-terminate", and "sewerslide" make a very serious issue sound like a joke. Blinding ourselves to it isn't going to make these problems go away.