It's also possible for the owners or employees of the company to be held liable if they ever visit the UK.
Or, you travel to the UK! It’s a pretty popular destination and Heathrow is a major European hub. It would be easy to get caught out.
The law may well be onerous and misguided. But looking at that site, it seems they reeeeally should do their due diligence. Not just to avoid the long arm of UK justice, but other territories too. It looks extremely dubious.
Their mitigation doesn’t make sense either. If they don’t shut down UK accounts, and those accounts use UK credit cards while continuing to use the service via a VPN, it could reasonably be argued that they know they’re providing a service to a UK resident. So, they really need to do their homework.
What they’re actually complaining about is the cost of doing business. It sounds pretty amateurish.
Source: I was recently listening to a podcast about court cases and an extradition case came up. They mentioned that the reasons not to be sent are mainly about human rights: if you need medical treatment that the target country can't (or isn't likely to) give, or if you would have to sit out your punishment under inhumane conditions (oftentimes you can sit out your punishment in the country that extradited you, after having stood trial in the other country's legal system), or if the country you're being sent to was involved in your torture (them handing you over to another party that you know does torture would count, even if their own hands are 'clean'; that was the crux in this case: whether the victim, aka the accused person, was handed over by the United States to Pakistani intelligence, and the USA therefore had forfeited its claim to further try the person for the alleged crime).
I have no idea if this is a likely or even possible consequence, but that’s one way lots of people have gotten ensnared by the long arm of the law, even when jurisdiction is otherwise normally lacking.
You'd be well advised to avoid a country like this.
If the UK had remained in the EU, then extradition might be possible (depending on whether courts approve it) but right now it's pretty unambiguously impossible.
https://www.gov.uk/government/publications/international-mut...
Apparently Germany is also (at least as of 2023) not extraditing non-citizens to the UK, due to the condition of British jails: https://www.theguardian.com/society/2023/sep/05/germany-refu...
Having said that, it can limit one's travel possibilities a lot.
For an extreme case, ask Julian Assange what might happen if a country doesn't like what you put on the internet.
There was a lot of "vote splitting" and spoiler effect going on due to FPTP.
Labour have a very weak mandate.
There's a certain irony in our politics right now that FPTP has been maintained by two dominant political parties because it has served their purposes to have little real challenge from smaller parties despite those parties collectively having quite a lot of popular support. That same system has now all but ended one of those two dominant parties as a force in British politics and at the next election it might well do the same for the other. The scariest question is who we might then get instead if Labour don't force through a radical change to our voting system while they still have the (possibly last) chance.
that's literally all there is to it.
Also I don't know what sort of weight "guidance" from a reg agency vs statute carries there, how much of a defense it is, etc.
The UK doesn't have a First Amendment or a Bill of Rights other than the European Convention on Human Rights, which leading parties campaign to abolish (if a bill of rights can be abolished by the legislature, it's not worth the paper it's printed on). Heck, it doesn't even have a proper written constitution, separation of powers, or an independent judiciary (the previous Parliament considered passing a law saying "Rwanda is a safe country to deport inconvenient asylum seekers to" in response to a court ruling (correctly) saying it manifestly isn't).
The UK and Australia are in a race to the bottom to see which one is going to be the worst enemy of the Internet. The only check against these authoritarian powers is popular juries, and they are trying to get rid of these as well.
https://hansard.parliament.uk/commons/2025-06-23/debates/250...
"with its members demonstrating a willingness to use violence"
As far as I am aware, they have only damaged property. Have they actually committed any acts of violence or advocated violence?
It is embarrassing for the UK military that they were able to get into a base and spray-paint military planes.
Also, the group has directly harmed people too:
> A police officer was taken to hospital after being hit with a sledgehammer while responding to reports of criminal damage.
people have literally been jailed for "hate speech" here because they clicked like on a tweet. labour is currently debating an official definition of "islamophobia" which would criminalize stating historical facts like "islam was spread by the sword" and "the grooming gangs are mostly pakistani". the govt put out a superinjunction forbidding anyone (including MPs) from mentioning they spent £7B bringing over afghans allegedly at risk from the taliban, and also criminalizing mention of the gag order itself, and so on recursively. nobody (other than judges and senior ministers) knows how many other such superinjunctions there are.
all this and more is covered by those 17 categories.
on top of this, britain claims global jurisdiction here. think a minute how absurd that is -- any website anywhere that any briton might access is in scope, according to ofcom. and they claim the power to prosecute foreigners for these "crimes" ...
I am aware that someone was jailed for encouraging people via social media to burn down a hotel with refugees in ( https://www.bbc.co.uk/news/articles/cp3wkzgpjxvo ). But not because they clicked like on a Tweet. Reference please.
I don't know where I stand on this legislation - my gut is that it's too heavy handed and will miss the mark. But I think we need to stop saying this falls solely on parents. The internet is far too big, and parenting is far too varied for this to work. I wish it would, but it won't. There simply aren't enough parents that care enough.
Should society just be about the accumulation of wealth and no other human needs considered? Or, have I misunderstood your comment?
Are you sure?
By "good outcomes" you mean "addicted to the government"... which is not entirely surprising.
Maybe you got to peek at something that a friend's older brother had, or a friend knew where his dad's stash was.
But bottom line it wasn't easily available and took some effort and risk to get it, and that's even if your parents weren't doing anything to prevent it.
On the internet it's just there. Maybe you click "Yes I'm 18" but that's hardly a roadblock.
What these laws do is try to get back to the 1990s and earlier when access was much more limited. Parents want this, they vote for this. In that sense, they are doing something about it.
Yeah. It’s called “parenting”.
Beyond the obvious response about parenting: do they?
There was absolutely no restriction on the web when millennials were growing up, and we didn't become a generation of degenerates.
I'd like to see actual proof that there is a need for mass online protection for children.
Their business is creating virtual AI friends, often with sexually suggestive themes.
You can browse through the characters here: https://janitorai.com/
Would you want to let a lonely kid who might already have self confidence issues and problems making real-life friends loose on that site?
For a site operator who seems really concerned about potential liability under this law, I sure wouldn’t have put this in writing. Feels like it really undermines the rest of the post and the compliance measures being taken.
There's literally "Best VPNs For Accessing Porn in the UK" articles, they'll be fine.
Yes, but the recommendation was for the platform, not the users ("For a site operator who seems really concerned about potential liability", emphasis added). Spelling out how anyone (including minors) can get to the site without any age check may not be a problem for the users themselves, but it is a liability for the platform.
Not saying I agree or disagree, just clarifying what the text above meant
People aren’t prevented from using VPNs in the UK, in case anyone is unclear on this.
I’m no prude, but I think this is a not-great thing to expose kids to, and the UK government is maybe not-terrible to want some sort of way to gate kids’ access.
it places a huge compliance burden on the 99.5% of small sites out there which are completely innocent
"janitor.ai" is not one of those sites...
an effective OSA should be targeted at sites like that
and saying "UK users, we can identify your accounts and have deliberately left them open cough VPN cough" isn't going to help them in the slightest (see Section 19)
Problem solved.
https://www.gov.uk/government/publications/online-safety-act... .
From what I'm reading, Amazon will have to implement age checks over 8/10 of its book inventory, with the other 2/10 exposing the company to liability under the very broad definition of "Age-appropriate experiences for children online." And yes, janitorai is correct that the act applies to them and the content they create, and a blanket ban on UK users seems the most appropriate course of action.
For what it's worth, the act does not seem to apply to first-party websites, as long as visitors of that website are not allowed to interact with each other. So, say, a blog without a comment feed should be okay.
That's happening with the AI act here as well. Almost no-one wants to even touch the EU shitshow and they're still going forward with it. Even Mistral was trying to petition them, but the latest news seem like it had no effect. Fuck us I guess, right? Both consumers and SMBs will lose if this passes as is.
It isn't populist either, no-one supports this. The UK has media campaigns run by newspapers, no-one reads the papers but politicians so these campaigns start to influence politicians. Always the same: spontaneous media campaign across multiple newspapers (low impact on other kinds of media), child as a figurehead, and the law always has significant implications that are nothing to do with the publicly stated aim.
Democracy has very little to do with it. Elections happen in the UK but policies don't change, it is obvious why.
it's an extremely trivial thing to do and the ofcom guidance is very easy to follow.
I really don't understand why parents don't bear the responsibility for their kids' internet access, as opposed to the expectation that the internet raises their kids...
I also don’t think this act is the way to address these issues, but I don’t think it’s as easy as just putting everything at the feet of the parents as I imagine it almost impossible to police at home, not to mention at school.
When I think of older technologies like television, we have rules and regulations about what can be shown when.
Again, this isn’t to say this approach is right, but wanting to regulate isn’t an attack on free speech. It seems there is regularly a tension on HN between free speech absolutists, usually from the US, and those more happy to accept regulation, usually from the EU.
I don't, because I don't need to.
I had unrestricted access to internet when I was my kids age and I turned out just fine, just like the extreme majority of my generation.
I know that serious discussions about important topics are enough to make sure that even if my kids do access content that's not meant for children, they're not negatively affected by them, just like I wasn't.
Again I don’t have kids, so I’m not in a position to judge, but I can only imagine the pressures on children are completely different nowadays. For example, we didn’t have computers in our pocket 24/7 with all of our peers on the other end influencing us in different ways.
Just as an aside, the UK has been doing this for decades. One of the most persistent causes of legislative failure in the UK has been making laws based on media pressure that have potentially huge costs in the future. And as these costs emerge, guess what the only solution is? More interventions.
I would suggest that a country which cannot control crime, cannot control borders, has a collapsing economy, cannot house people, etc. has bigger problems than trying to arrest people for saying things on Twitter or shut down end-to-end encryption.
the question of whether the state should try to fix this stuff is irrelevant, it cannot
Many kids from these backgrounds are seen as little more than cash cows to help their parents get homes and income without having to work.
If you turn having children into a profit-seeking enterprise, you will get bad outcomes. The UK is in the bizarre position where people who should have kids aren't having kids but they are paying for the children of those who shouldn't have kids.
The statistics coming out about this are incredible. We are looking at 20-30% of this cohort likely being unable to work at any point in their lifetime.
the solution to children stabbing each other was to attempt to ban knives
i believe they are also gearing up for a ban on cigars, cigarettes have a scale which will make them effectively illegal for children born today, more types of pornography are being banned
...this is the same country which has legalised heroin use regionally btw...you need a licence to watch porn but your neighbour can shoplift, shoot up heroin, and you pay his rent...
however bad you think it is, it is much worse
Platforms are also responsible for not allowing their services to be used to abuse children, which is also true offline.
As a wise man once said:
> The British legal system is and always has been a litany of injustices dressed up in formal attire. To be avoided at all costs.
This Twitter-style faux-casual way of writing is so common among AI people right now (see Sam Altman) and it’s extremely grating. I don’t know anything about this project, but if they really cared about their users, I would hope that they’d use capital letters and punctuation when addressing them in an official announcement.
Also:
> "and honestly? i..."
> "this is not just content moderation - it is a..."
These are two huge shibboleths of gpt-ese. He had to specifically tell the bot to write in that style.
eg the past decade has seen us remove “that” as a qualifier, and the word literally has become interchangeable with figuratively.
its worth considering whether you’re just losing touch…
The latter didn’t happen just in the last decade, and the former hasn’t happened at all.
But no, I can pretty confidently say that the English language still has capitalization and punctuation in it, it’s mostly just on Twitter and in AI-related blog posts where people write like this.
That kind of posturing is forgivable when you’re a teen. When you do it as the CEO of one of the most influential companies today, it’s grating. When you do it because you’re another CEO in a similar market and you’re trying to signal that you’re part of the “in” crowd, it’s frankly embarrassing.
Is that the case here (and it just happens that I have no clue what this particular site is about) ?
Or am I grossly misunderstanding the act (very likely I guess since IANAL) ?
[1] https://www.onlinesafetyact.net/analysis/categorisation-of-s...
-------------------------------------------
Ofcom’s advice to the Secretary of State
Ofcom submitted their advice – and the underpinning research that had informed it – to the Secretary of State on 29 February 2024 and published it on 25 March. In summary, its advice is as follows:

Category 1
Condition 1:
Use a content recommender system; and
Have more than 34m UK users on the U2U part of the service
Condition 2:
Allow users to forward or reshare UGC; and
Use a content recommender system; and
Have more than 7m UK users on the U2U part of the service
Ofcom estimates that there are 9 services captured by condition 1 and 12-16 likely to be captured by condition 2. There is one small reference in the annex that the 7m+ monthly users threshold corresponds to the DSA (A6.15)

Category 2a (search)
Not a vertical search service; and
Have more than 7m UK users
Ofcom estimates that there are just 2 search services that currently sit (a long way) above this threshold but that it is justified to put it at this level to catch emerging services.

Category 2b (children)
Allow users to send direct messages; and
Have more than 3m UK users on the U2U part of the service
Ofcom estimates that there are “approximately 25-40 services” that may meet this threshold.
-------------------------------------------
[1]: https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...
Otherwise everyone from small sites to Facebook would just shop around jurisdictions and formally operate their websites from wherever fits them best.
And if a service is used by citizens of your country it makes sense to scale requirements by the impact it is having on them.
This particular law may not be great overall but I've got no issues with this method. As a site provider outside the UK it's trivially easy to avoid liability (by blocking people)
However, I'd argue that from the UK perspective any of these developed countries would be considered reg-shopping destinations.
They don't consider other countries' laws as sufficient to protect their citizens (which I probably wouldn't agree with).
I'm struggling to see a better approach to make sure any law applicable to online services is actually effective.
It seems like there are only a few options:
1. Only pass globally agreed upon laws (which is basically nothing and not realistic)
2. Only regulate domestic providers (which would severely disadvantage them and not resolve the underlying issues)
3. Block all foreign services (which is even more drastic).
In the end a state's power is tied to its territory and the people living within it. If not being active in that state at all releases you from its laws, I would consider this appropriate.
This is on the UK to fix (or not fix) for themselves.
I wouldn't be surprised if this ends up being a topic in trade negotiations with the US in the future though, since this is a trade barrier that imposes significant regulatory cost on US companies for content that is legal in the US. Eg. the proscribed categories of illegal content include knives and firearms, hate, etc.
what ruling are you referring to? This is about the Online Safety Act, an act of parliament.
https://www.ofcom.org.uk/siteassets/resources/documents/cons...
Everybody (who's not specifically exempted by Schedule 1, which has nothing to do with what you linked to) gets a "duty of care". Everybody has to do a crapton of specific stupid (and expensive) administrative stuff. Oh, and by the way you'd better pay a lawyer to make sure that any Schedule 1 (or other) exemption you're relying on actually applies to you. Which they may not even be able to say because of general vagueness and corner cases that the drafters didn't think of.
Also, it's not a "ruling". It's a law with some implementing regulations.
Then there are additional requirements applied to 3 classes of services: Category 1, 2A, 2B. The latter have the thresholds as discussed above.
But, as usual, poorly written. E.g. a "Content Recommendation System" -- if you choose, via any method, to show content to users, you have built a recommendation system. See e.g. Wikimedia's concern that showing a picture of the day on a homepage is a bona fide content recommendation system.
The definition
> (2)In paragraph (1), a “content recommender system” means a system, used by the provider of a regulated user-to-user service in respect of the user-to-user part of that service, that uses algorithms which by means of machine learning or other techniques determines, or otherwise affects, the way in which regulated user-generated content of a user, whether alone or with other content, may be encountered by other users of the service.
https://www.legislation.gov.uk/uksi/2025/226/regulation/3/ma...
If you in any way display UGC, it's essentially impossible not to do that. Because you pick which UGC to display somehow.
That said, the thinking that a smaller platform should mean exemptions seems a touch flawed too, given the topic. If you're setting out to protect a child from content that, say, promotes suicide, the size of the platform isn't a relevant metric. If anything, the smaller, less visible corners of the internet (like the various chan sites) may even be higher risk.
Take something like a plastic packaging tax, for example. A company like Amazon won't have too much trouble setting up a team to take care of this, and they can be taxed by the gram and by the material. But expecting the same from a mom-and-pop store is unreasonable - the fee isn't the problem, but the administrative overhead can be enough to kill them. Offering an alternative fixed-fee structure for companies below certain revenue thresholds would solve that problem, while still remaining true to the original intention.
But playing devil's advocate a bit here: if the risk profile to the kid is the same on big and small platforms, then there isn't any ethical room for a lighter regime, never mind a full exemption or any exemption at all. The whole line of reasoning that "you can't afford it, therefore more kids potentially getting hurt on your platform is more acceptable" just doesn't play. And similarly, if you do provide a lighter-touch regime, then the big players will rightly say: well, if that is adequate to ensure safety, then why exactly can't we do that too?
Platform size just isn't a relevant metric on some topics - child safety being one of them. Ethically whether a child is exposed to harm on a small or big website is the same thing.
Not that I think this act will do much of anything for child safety. Which is why I think this needs to go back to the drawing board entirely. Cause if we're not effectively protecting children yet killing businesses (and freedoms), then wtf are we doing?
Agreed, but I guess it could be the case that the current regulation is too burdensome even for large corps, although they can at least afford the resources necessary to deal with it?
This, of course, would disproportionately burden smaller buildings, while some larger buildings would have little trouble complying. Guess who would complain more often. But it, while outwardly insane, would clear small huts off the market, while the owners of large reinforced buildings would be able to reclaim the land, as if by an unintended consequence.
Driving the risk tolerance of a society lower and lower interestingly dovetails with the ease of regulatory capture by large incumbent players, as if by coincidence.
The UK is a leading example of what calls for regulation turn into in the real world.
I’ve noticed a lot of calls for regulation in Hacker News comments lately. In the past week I’ve read multiple threads here where people angrily called for regulations and consequence for anything LLM related they didn’t like: When LLMs produced mistakes, when they produce content too close to copyrighted works, and so on.
There’s an idea that regulation is a magical function that you apply and then the big companies suffer consequences, products improve to perfection to avoid the regulations, and nothing is lost for consumers.
Then you look at real-world heavy handed technology regulation and see what really happens: Companies just have to turn off access to countries with those regulations and continue on with their business. People who use the tools get VPNs and continue operating with a little extra hassle, cost, and lag. Businesses avoid those countries or shut down because it doesn’t make sense to try to comply.
There’s a constant moving of goalposts, too: Every time someone points out the downsides of these regulations it’s imagined that better regulation would have exempted the small companies or made it cheap to comply (without details, of course).
I think heavy handed technology regulation is yet another topic where the closer it gets to reality, the less people like it. When you point out real world examples, the response is always “No, not like that”
By "changing your government" I don't mean "shuffle people in and out of parliament" or even "elect your 6th Prime Minister in 10 years".
I mean change your government.
It depends what you mean by government but elected officials in the UK are almost completely irrelevant in this (and in most other things, their job is to get in front of a TV camera say how appalled they are that it has happened, no-one could have foreseen this, don't look back in anger, and that they are going to select from the same policy options that the Civil Service presented the last government with...which results in the same conclusion: more civil servants, more regulation, more corruption).
Ofcom has been making a massive power grab, this bill and other recent regulations are granting them massive new powers, and the UK has a system in which ministers have no functional capacity to block this.
Don’t fall into the trap of thinking that this law must have come about by sinister machinations just because you don’t think it’s a good law.
Elected officials are irrelevant because, in this case, they are functionally unable to propose a reasonable alternative. Policy options were presented, all of the policy options offered in this case were to empower Ofcom (again, how old are you? are you unable to think of another situation like this: same thing happened with BoE/PRA in 2010, same thing happened with Border Force creation, same thing happened with illegal immigration where the only functional action presented was to increase Home Office staffing, I can go on and on) so what was the alternative?
You also may not understand what is going on here either: this legislation gave Ofcom new statutory powers but what you might not be aware of is that Ofcom has also been granted through non-statutory instruments significant new powers that interlock with this legislation...who voted on that? Again: media campaign by the powers that be, multiple Home Office ministers have been railroaded into this, Ofcom/security services have been granted new powers (the latter by the back door...again, tell me what that has to do with "social media abuse"? nothing).
Finally, elected officials are mostly pawns. The majority of "them" do not know what is in the legislation. They have been told the PM wants "Molly's Law" passed, so it is passed. Some of the scrutiny was laughable...do you understand that the law has a provision that requires tech companies to provide "end-to-end encryption" that the government can break? To their credit, this was questioned by legislators several times...it wasn't removed from the Bill, despite it obviously being unrelated to anything in the Act. Again, how are you this blind? Do you see why elected officials might be irrelevant if you are so blind?
Also, none of these laws are led by politicians. All the communications-related offences were led by security services. Do you know when Molly actually passed? 2017? And you are telling me that a law passed in 2025 is related? A law which largely contains things unrelated to her passing (i.e. significant expansion of Ofcom powers).
It is genuinely funny that people who have no idea about politics (as in, no first-hand knowledge) will just imagine that everything works the way they assume it must work. At the very least, you should exercise some common sense: we have seen multiple Home Office ministers pass through, ministers are gaining no new extra powers, Ofcom staffing has gone up by thousands, the amount of graft that has already gone on is staggering...but the ones to blame are, conveniently, the ones who are always blamed...but who seem to be strangely unable to do anything...you cannot possibly be this credulous.
Top management can never go against the RCSR guys, who are like priests of the church in the Middle Ages. And the RCSR guys have no goals linked to the progress of the real work. They don't like anything that moves. It's a risk.
Management thinks that RCSR helps with controls around the work. But what happens is, you put more people in building controls, they deliver fort walls around your garbage bins.
In other words, the guys who actually believe in a normal level of compliance are hamstrung, defection is rewarded, classic failure mode of this kind of thing.
It's like how the growth of a tree trunk happens in a thin layer beneath the bark and the rest is inert wood.
But if the regulation is indeed oppressive or byzantine, everybody hurts and only the biggest survive.
*Social contagion effects on risk perception can be a confounding factor here, though.
Jeez, and I was just grumbling how it takes up to 24 hours for a change I make to appear in prod, compared to about 4 hours at my previous job.
It should be possible to develop automated systems and processes to keep most changes on the “happy path” where approvals are quick. These sorts of organizational responses, where a new layer of people who can say “no” grows whenever a problem happens or a regulation changes, are suffocating.
Because the policy teams don't understand the technical details, they get hung up on things that don't matter so all your "security budget" (in terms of dev effort available for improvement) gets spent on useless things.
Because there is a CIS guideline recommending "do not put secrets in environment variables" in k8s, a team I work with recently spent two weeks modifying deployments of all their third party charts to mount all secrets as volumes, this included modifying / patching upstream helm charts. This will be a maintenance burden forever lol. Actual security benefit in context is approximately zero.
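For context, the difference that guideline is about is a small change in the pod spec. A hedged sketch (image, secret, and path names are illustrative, not from the team's actual deployment): exposing a Secret as an environment variable versus mounting it as a volume:

```yaml
# Env-var style (what the CIS guideline flags):
containers:
  - name: app
    image: example/app:1.0
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password

# Volume-mount style (what the team migrated to):
containers:
  - name: app
    image: example/app:1.0
    volumeMounts:
      - name: db-credentials
        mountPath: /var/run/secrets/db
        readOnly: true
volumes:
  - name: db-credentials
    secret:
      secretName: db-credentials
```

For first-party apps this is trivial; the two weeks of pain comes from third-party Helm charts that only expose the env-var form and have to be patched downstream.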
Meanwhile they COULD have spent that time implementing broadly effective things like NetworkPolicy or CSP for the front-end but now there is no dev time for that.
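For comparison, a default-deny NetworkPolicy of the kind mentioned above is a similarly small amount of YAML (a sketch; the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress        # all inbound traffic is denied unless another policy allows it
```

Roughly the same effort as patching one chart, but it actually reduces the blast radius of a compromised pod.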
At the top levels of their process there are plenty of risk x impact matrices but there aren't corresponding effort / payoff assessments, so the things being done end up not being engineering.
Not if they never ever ship an update.
(i - briefly - worked for place that did things this way. They didn't update the chart, but they _did_ use `sed` to update the image tag from time to time.)
Thus the only reasonable course of action was to do nothing.
Legal frameworks are incredibly irritating and often defined by people who know nothing about what they're regulating. This can lead to very bad laws.
Given that we live in the real world and one can be sued for violating those laws, the stakes are quite high and bringing a suit is low cost. Many states in the US allow individuals to bring suit based on these laws, meaning all that needs to happen is you make a mistake and some rando has the time to hire a lawyer.
And that is, by the way, notwithstanding the reality that many of the annoying compliance requirements are actually not all that annoying in principle, they are annoying to implement because many software development practices involve the free flow of data.
Maybe that was fine when it was public blogs. But when it's someone's medical records or their financial data, or when it lands in the bucket of a Cambridge Analytica type, sorry, there's a higher burden.
It is frustrating, I know. But we as engineers need to take responsibility for the consequences and stakes of what we're building. It's lack of that awareness that caused many of the largest controversies in our industry.
Reality, as always, is much more nuanced than this hot take. It's a balancing act.
Just because it's become really easy to spin-up a business doesn't mean your business should be allowed to ignore laws - regardless of your opinion on them. This should apply even more strictly to businesses selling/providing age restricted items. A small tobacconist is subject to the same laws around selling tobacco as a large supermarket. Why should this be different online?
I can understand basic disagreements with the general usefulness of the new law, but "I'm just a little guy" is a poor argument. Designing the law so you can only get your creepy AI porn from small businesses defeats the purpose of it.
It is actually a really bad law.
https://www.thehamsterforum.com/threads/big-sad-forum-news-o...
(Yes, the UK has effectively made it illegal to run a forum for people with pet hamsters.)
They address compliance in their terms, and apparently did a reboot and introduced new moderation tools.
Did they reach the conclusion that they do not pose significant risks and that their tools are sufficient?
I'm very curious.
> Your online service has links with the UK if:
> UK users are a target market for your service; or
> It has a significant number of UK users
What is "significant"? Is it a percentage or a raw number?
I'll click "no" - maybe 5% of my users are from the UK. Great, wizard complete! I don't need to worry...except:
> Please note that this result is indicative only
We have a free-to-use technical forum for our data-wrangling software, powered by Discourse. I believe that might allow users to directly message each other. Does that count?
The only escape hatch they offer later is if all communication between users is delegated to another medium, e.g. email, phone, or SMS.
I don't agree with everything in the Online Safety Act, but if anything needed a risk assessment, it's surely this?
[0] “law” here isn’t just laws made by governments, but also regulations made by e.g. Visa and Mastercard
Are there services which offer a less... risky... service that are similarly affected here?
You can read their community discussion at https://lobste.rs/s/ukosa1/uk_users_lobsters_needs_your_help...
This firm doesn’t care a whit about the impact on users - they are just too cheap to follow the rules.
If your business can’t operate under regulation, it shouldn’t operate at all, because it clearly relies too heavily on exploiting labour and/or consumers.
Of course that affects attitudes here, even if most people on here will never actually be a founder, let alone a highly successful one.
I'm old enough to remember when one of great things about the web was the low barrier to entry.
Not every site has a large company with deep pockets behind it. Some of the websites I've run, I've run at a loss because I was interested in the subject and thought it provided real value for other people. Probably the income from these sites was in the hundreds of dollars a year range, the cost in time and effort waaay beyond that.
I don't know the actual compliance costs here - I know the cost of a UK lawyer just to review obligations and liabilities is probably going to be a few hundred quid, if not substantially more. I don't know of many non-professional, or FOSS sites that could afford that.
Your curt dismissal of this huge chunk of the internet, saying it shouldn't be operating at all, is mind-boggling.
The wild west of the internet was largely a mistake and created massive social disruption for the benefit of a tiny few and was caused by regulators being asleep at the switch. It is good they are finally catching up.
The western internet as we knew it is dead. Privacy is dead, we already live in a post-Snowden panopticon. With multiple always-on microphones in every public room and often in private rooms too. HD cameras are everywhere. If you live in any major city, hundreds to thousands of hours of footage, which might contain you, is being uploaded for public view and AI training daily.
There have been other open-source deathblow laws passed in the EU, like the Cyber Resilience Act and the Product Liability Directive, which have been repeatedly dismissed by other commenters on HN. Earlier legislation like the GDPR was dismissed too, as only affecting big companies. The argument in support of these laws has basically been "you are small, so you don't need to follow them." That seems like a lot of disrespect for the EU's legal system, but maybe it's well deserved.
It's only a matter of time until ID verification will be mandated even to make a post like this one on sites like hn. Western companies assisted authoritarian prison states in monitoring, censoring and controlling their citizens, when they should have been doing the exact opposite. Now it's really hard to argue that it isn't possible here.
Another example: here in the USA we have 50 states vying to each regulate AI; my understanding is that the plan was to have the OBBB put them off from doing this for at least 10 years, but that effort failed, leaving them free to each do their own thing.
Complying with all rules and regulations in all jurisdictions to which a website/service could be exposed (i.e. worldwide, by default) seems like it's becoming a well nigh impossible task these days.
Cyberspace is not, in the end, independent[1] if you plan to make money off it and rely on real-world IDs.
I'm broadly in favor, I think. This species can have instant global communications once it's shown itself equal to the responsibility that implies.
It was fine when the Internet was a side hobby for a slice of the population, but now that it’s fundamental to all aspects of life, having it under the control of a foreign government (and the tech companies which act as de facto organs of that government) is no longer acceptable.
Also, at $1.50/user, what if users pay for accounts? Perhaps this will be the Band-Aid ripping moment that ushers in sane communities that are no longer under attack by endless bad actors with infinite free accounts?
Having to verify your identity (without it being exposed to other users) sounds like it could reduce endless botting.
On the other hand such a system is basically begging to be abused by bad actors (state and non state)
Strawmanning their position isn't helpful to anyone.
There are normally two shift keys on your keyboard, try either one.