Same people now: how will the poor company know that it's an underage user?? Oh noes!
The platform asks your government if you're old enough. You identify yourself to your government. Your government responds to the question with a single Boolean.
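A minimal sketch of that flow, with invented names, and with stdlib HMAC standing in for a real public-key signature: the government answers the question with a signed Boolean, and the platform verifies the signature without ever seeing the identity.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of the flow described above: the government holds a
# signing key; the platform only ever sees a signed yes/no, never the identity.
GOV_KEY = b"stand-in for the government's private signing key"

def issue_attestation(citizen_birth_year: int, current_year: int, nonce: str) -> dict:
    """Government side: answer 'is this user 18 or older?' with a single Boolean."""
    over_18 = (current_year - citizen_birth_year) >= 18
    payload = json.dumps({"over_18": over_18, "nonce": nonce}, sort_keys=True)
    sig = hmac.new(GOV_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def platform_verifies(attestation: dict) -> bool:
    """Platform side: check the signature, read the Boolean, learn nothing else."""
    expected = hmac.new(GOV_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        raise ValueError("forged attestation")
    return json.loads(attestation["payload"])["over_18"]

att = issue_attestation(citizen_birth_year=1990, current_year=2025, nonce="abc123")
print(platform_verifies(att))  # True: the platform learns only the Boolean
```

In a real deployment the government would use an asymmetric signature (e.g. Ed25519) so that a modified client could never forge an attestation; HMAC appears here only because it ships with the standard library, and with HMAC anyone holding the verification key could also sign.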
It would be possible for them to provide an open-source app, but design the cryptography in such a way that you couldn't deploy it anyway. That would make it rather pointless.
I too hope they design that into the system, which the Danish authorities unfortunately don't have a good track record of doing.
If the app is open source, what stops someone from modifying it to always claim the user is over 18 without an ID?
And using someone else's ID and password is a problem shared by every method of auth.
Source: I wrote to Digitaliseringsstyrelsen in Denmark, where this solution will be piloted next year, and they confirmed that the truly anonymous solution will not be offered on other platforms.
Digitaliseringsstyrelsen and the EU are truly, utterly fucking us all over by locking us into the trusted computing platforms offered by the current American duopoly on the smartphone market.
https://digst.dk/it-loesninger/den-digitale-identitetstegneb...
The difference is meaningful. It's mostly a prisoner's dilemma: if only one person's porn habit is exposed, that's bad for them; if everyone's (legal) porn habits are exposed, it gets normalized.
The problem isn't my peers, it's the people in power and how many of them lack any scruples.
this is too narrow a view on the issue. the problem isn't that a colleague, acquaintance, neighbor, or government employee is going to snoop through your data. the problem is that once any government has everyone's data, they will feed it to PRISM-esque systems and use it to accurately model the population, granting the power to predict and shape future events.
the social media platforms already measure more than enough signals to understand a user's likely age. they could be required by law to do something about it
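As a toy illustration of the parent's point — with entirely invented signals and thresholds, nothing resembling any platform's actual model — age can be roughly estimated from behavioral signals the platform already has:

```python
# Toy heuristic (invented features and cut-offs) in the spirit of the comment
# above; real platforms train far richer models on labelled accounts.
def likely_minor(signals: dict) -> bool:
    score = 0
    if signals.get("follows_school_accounts"):
        score += 2  # network structure leaks age
    if signals.get("active_hours_overlap_school_day"):
        score += 1  # usage rhythm leaks age
    if signals.get("stated_age", 99) < 18:
        score += 3  # self-reported, when present
    if signals.get("peer_group_median_age", 99) < 18:
        score += 2  # friends' ages are highly predictive
    return score >= 3  # hypothetical threshold

print(likely_minor({"follows_school_accounts": True,
                    "peer_group_median_age": 15}))  # True
```

The point is not the specific features, but that no ID check is needed for a platform to form a confident guess.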
It seems to me like it's either a privacy disaster waiting to happen (if not required) or everyone but the biggest players throwing out a lot of bathwater with very little baby by simply not accepting Danish users (if required).
The wording on the page also makes it sound like their threat model doesn't include themselves as a potential threat actor. I absolutely wouldn't want to reveal my complete identity to just anyone requesting it, which the digital ID solution seems to have covered, but I also don't want the issuer of the age attestation to know anything about my browsing habits, which the description doesn't address.
Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph, as well as letters.[2] Turns out online messaging doesn't count. It'd be a funny one to get to whatever court, because hopefully someone there will have a brain and use it, but it wouldn't be the first time someone didn't.
[1] https://boingboing.net/2025/09/15/danish-justice-minister-we...
Regardless, this wouldn't run afoul of this. This is similar to restricting who can buy alcohol, based purely on age; the identification process is just digital. MitID - the Danish digital identification infrastructure - allows a service to request specific details about a user, such as their age, or just a Boolean value indicating whether they are old enough. Essentially: the service can ask "is this user 18 or older?" and the ID service can respond yes or no, without providing any other PII.
That's the theory at least; nothing about snooping private communication, but rather forcing the "bouncer" to actually check IDs.
That has nothing to do with the medium of the ticket and everything to do with knowingly presenting a fake ticket. The ticket is a document proving your payment for travel. Tickets could be lumps of dirt and it would still be document fraud to present a fake handful of dirt.
> That's the theory at least; nothing about snooping private communication, but rather forcing the "bouncer" to actually check IDs.
Hopefully the theory will reflect the real world. The 'return bool' from 'isUser15+()' is probably the best we can hope for, and it should prevent the obvious problems, but there can always be more shady dealings on the backend (as if there aren't enough of those already).
That's very much not how Danish law works. The specific paragraph says "hvor ingen lov hjemler en særegen undtagelse, alene ske efter en retskendelse.", translated as "where no other law grants a special exemption, only happen with a warrant". That is, you can open people's private mail and enter their private residence, but you have to ask a judge first.
The relevant points I believe to be:
> All citizens are placed under suspicion, without cause, of possibly having committed a crime. Text and photo filters monitor all messages, without exception. No judge is required to order such monitoring – contrary to the analog world, which guarantees the privacy of correspondence and the confidentiality of written communications.
And:
> The confidentiality of private electronic correspondence is being sacrificed. Users of messenger, chat and e-mail services risk having their private messages read and analyzed. Sensitive photos and text content could be forwarded to unknown entities worldwide and can fall into the wrong hands.
> No judge is required to order such monitoring
That sounds quite extreme, I just can't square that with what I can actually read in the proposal.
> the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State
It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.
That all sounds extremely boring and political, but the essence is that it mandates a local authority to scan messages on platforms that are likely to contain child pornography. That's not a blanket scan of all messages everywhere.
So every platform, everywhere? Facebook and Twitter/X still have problems keeping up with this, Matrix constantly has to block rooms from the public directory, Mastodon mods have plenty of horror stories. Any platform with UGC will face this issue, but it’s not a good reason to compromise E2EE or mandate intrusive scanning of private messages.
I would not be so opposed to mandated scans of public posts on large platforms, as image floods are still a somewhat common form of harassment (though not as common as it once was).
It therefore breaks E2EE, as it intercepts messages on your device and sends them off to whatever third party they plan to use before the messages are encrypted and sent to the recipient.
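A hypothetical sketch of that interception order (invented names; real proposals match perceptual hashes of images rather than exact hashes of text): the scan runs on the plaintext, and encryption only happens afterwards, which is why the result is no longer end-to-end.

```python
import hashlib

# Sketch of client-side scanning as described above: the message is checked
# against a match list *before* encryption, so the scanner necessarily sees
# every plaintext. SHA-256 over text stands in for a perceptual image hash.
MATCH_LIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def report_to_authority(digest: str) -> None:
    # In the proposal's architecture, a hit would be forwarded off-device.
    print(f"reported {digest[:12]}...")

def send_message(plaintext: bytes, encrypt) -> bytes:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in MATCH_LIST:
        report_to_authority(digest)  # leaves the device in the clear
    return encrypt(plaintext)        # "E2EE" only begins after the scan

# Trivial stand-in cipher, just to show the ordering.
ciphertext = send_message(b"hello", encrypt=lambda p: p[::-1])
print(ciphertext)  # b'olleh' - harmless messages still pass through the scanner
```

Note that the cryptography is untouched; the confidentiality guarantee is defeated purely by where the scan sits in the pipeline.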
> It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.
How can a judge be involved when we are talking about scanning hundreds of millions if not billions of messages each day? That does not make any sense.
I suggest you re-read the Chat control proposal because I believe you are mistaken if you think that a judge is involved in this process.
I dispute that. The proposal explicitly states it has to be true that "it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material;"
> How can a judge be involved
Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.
I suggest YOU read the proposal, at least once.
> it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material
That is an utterly vague definition that basically encompasses all services available today, including messaging providers, email providers, and so on. Anything can be used to send pictures these days. So therefore anything can be targeted, ergo it is a complete breach of privacy.
> Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.
Your assertion makes no sense. The only way to know if a message contains something inappropriate is to scan it before it is encrypted. Therefore all messages have to be scanned to know if something inappropriate is in it.
A judge, if necessary, would only be participating in this whole charade at the end of the process not when the scanning happens.
This is taken verbatim from the proposal that you can find here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A20...
> [...] By introducing an obligation for providers to detect, report, block and remove child sexual abuse material from their services, .....
It is an obligation to scan, not a choice based on someone's judgment, such as a judge's; ergo no one is involved at all in the scanning process. There is no due process here, and everyone is under surveillance.
> [...] The EU Centre should work closely with Europol. It will receive the reports from providers, check them to avoid reporting obvious false positives and forward them to Europol as well as to national law enforcement authorities.
Again, no judge involved here. The scanning is automated and happens for everyone. Reports will be forwarded automatically.
> [...] only take steps to identify any user in case potential online child sexual abuse is detected
To identify a user who may or may not have shared something inappropriate means they know who the sender is, who the recipient was, what the message contained, and when it happened. Therefore it's a complete bypass of E2EE.
This is exactly the same thing we are seeing now with the age requirements for social media. If you want to ban kids who are 16 years old and under, then you need to scan everyone's ID in order to know how old everyone is, so that you can stop them from using the service.
With scanning, it is exactly the same. If you want to prevent the dissemination of CSAM on a platform, then you have to know what is in each and every message so that you can detect it and report it as described in my quotes above.
Therefore it means that everyone's messages will be scanned, either by the services themselves or by a third-party business the task is outsourced to, which will be in charge of scanning, cataloguing, and reporting its findings to the authorities. Either way the scanning will happen.
I am not sure how you can argue that this is not the case. Hundreds of security researchers have spent the better part of the last 3 years warning against such a proposal, are you so sure about yourself that you think they are all wrong?
Censorship really is one of the few areas where the law is pretty unambiguous: it's really just "no, never again". Not that this stops politicians, but that's a separate debate.
And this is why laws should always include their justification.
The intent was clearly to protect people - to make sure the balance of power does not fall too much in the government's favor that it can silence dissent before it gets organized enough to remove the government (whether legally or illegally does not matter), even if that meant some crimes go unpunished.
These rules were created because most current democratic governments were created by people overthrowing previous dictatorships (whether a dictator calls himself king, president or general secretary does not matter) and they knew very well that even the government they create might need to be overthrown in the future.
Now the governments are intentionally sidestepping these rules because:
- Every organization's primary goal is its own continued existence.
- Every organization's secondary goal is the protection of its members.
- Any officially stated goals are tertiary.
the horror!
The serious answer is that banning "social media" is a bit silly. We should concentrate on controlling the addictive aspects of it, and ensuring the algorithms are fair and governed by the people.
I'm not entirely sure how I'd want to word it, but it would be something like: It is prohibited to profit from engagement generated by triggering negative emotions in the public.
You should be free to run a rage-bait forum, but you cannot profit from it, as that would potentially generate a perverse incentive to undermine trust in society. You can do it for free, to ensure that people can voice their dissatisfaction with the government, working conditions, billionaires, the HOA and so on. I'd carve out a slight exception for unions being allowed to spend membership fees to run such forums.
Also politicians should be banned from social media. They can run their own websites.
To me there is no question that children should grow up protected from harmful substances. You don't want kids to smoke, scrolling algo feeds is not better. There is enough interesting internet out there without social media!
You know how in school they used to tell us we can't use calculators to solve math problems? Same thing. It can't be done by individual parents either, because then kids would get envious and that in itself would cause more problems than it would solve.
It is important for kids to get bored, to socialize in person, to solve problems the hard way, and develop the mental-muscles they need to not only function, but to make best use of modern technology.
It is also important that parents don't use technology to raise their children (this includes TV). Most parents just give their kids a tablet with YouTube these days.
> they learn how to function without technological dependencies.
So like the Amish? Or are they still too technologically dependent and children need to be banned from pulleys, fulcrums, wheels, etc.?
Remember YikYak? IIRC that was worse for kids than most of the big social media sites, but how do you write a law that anticipates the next YikYak without banning everything?
As someone who got my first BlackBerry at 11, which really spurred a lot of my later interests which are now part of my career or led to it indirectly, I am opposed to paternalistic authoritarian governments making choices for everyone.
(Funny anecdote, but I didn't even figure out how to sign up for Facebook until I was 11-12, because I wouldn't lie about my age and it would tell me I was too young. Heh.)
I highly recommend discussing a smartphone pact such as http://waituntil8th.org with fellow parents before anyone in their friend group gets a cell phone.
Give people technology, but let's finally have an honest conversation about it. As an adult it's already hard to muster enough self-control to not keep scrolling.
I don't scroll social media. When I was 14-17, sure. But then I lost interest, much like most of my peers did.
(I do probably refresh HN more than I should though, but I think that's probably the least evil thing I could do compulsively...)
You're either operating with an anachronistic notion of what constitutes social media, or you're very out of touch with the public. Not sure which one.
The "myspaces" and "facebooks" are trending down, but other forms of social media like tiktok, discord, reddit, youtube, etc are alive and well, still hooking kids young as they always have.
How we define "social media" matters, I think. I don't really consider things like TikTok "social media", even though there is both a social component and a media component, since the social part is much smaller than the media part. People aren't communicating on TikTok (I think), and communication is what people worried about "being left out by their peers" are referring to. That type of "social" media probably isn't dying, but I suspect its growth is or will become stagnant, while traditional "social media" continues to regress over the next decade.
Parents are doing what they can, but it inevitably comes down to "but my friend x has it so why can't I have it" - so any and all help from government / schools is a good thing.
This is so, so, so obviously a nasty, dangerous technology - young brains should absolutely not be exposed to it. In all honesty, neither should older ones, but that’s not what we’re considering here.
Do you buy your kids a toy every time you go to the store? Do you feed them candy for dinner?
Easy. If half the conversation happens online, and your kid wasn’t part of that, they’d constantly need to be “filled in” when they got to school.
Imagine if your company used slack but you weren’t on it. You could still go to all the meetings, but there would have been conversations held and decisions made that you wouldn’t even know about. You would feel like you were on the out. Banning an individual kid from social media would be just the same.
Ah, bliss...
You probably aren’t familiar with this, as it’s somewhat of a secret, but parents have a unique tool they can use called “no.”
IMO it’s much better - for everyone - to ban this stuff at the community level. Then there’s no FOMO.
I’m old enough to remember the same trash arguments over video games, rap music, even (for some unknown reason) the Disney Channel. This is just another moral panic.
Yes, “no” is a tool that more parents can and should reach for. But if you’ve got any experience at all with kids you’ll know it’s really not as straightforward as this. The more responsibility you can push off to others, such as government or schools, the easier this is.
We brought ours up with pretty strong guidelines and lots of “no” but we’re fortunate in having some time and some money and some knowledge about how to block stuff on the network and so on - lots of parents aren’t as lucky. They need all the help they can get.
* Kid who is told "no" by his parents
* Kid who is told "yes" by his parents
* Kid who "can't" sign up for social media because it's illegal to do so at their age, who then signs up for it when it becomes legal.
I would really like to see what you believe the outcomes of these three scenarios would be, because I doubt any of them are truly catastrophic, considering we are, at best, merely delaying the onset of social media use by the kid by just 2-3 years.
Personally I want to do something about this, and IMO every move in the direction that helps even in a small way is a good one.
It sucks ass being a parent now. I literally had someone interrogate my kid when I let them pretend to walk independently part of the way from school on our own property -- I was actually watching but from behind the bushes so she could have some freedom but still with oversight, and I had to pop up and deal with the Karen before she could get around to calling CPS or whatever else she had in mind.
The real problem here is that far fewer people are parents now, or they have no idea what parenting is like, so they don't understand the practicalities of raising children. They come up with the dumbest laws possible and then lord them over you with the full weight of the state, so they can pretend to be parents with none of the responsibility and all of the smug moral superiority.
Second, this move by Denmark reflects a failure to regulate what social media companies have been doing to all their users.
e.g. What has Meta done to address their failures in Myanmar?[1] As little as was legally possible, and that was as close to nothing as makes no difference. More recently, Meta's own projections indicate 10% of their ad revenue comes from fraud[2]. The real proportion is almost certainly higher, but Meta refuses to take action.
Any attempt to tax or regulate American social media companies has invited a swift and merciless response from the U.S. government. To make matters worse, U.S. law makes it impossible for American companies to respect the privacy of consumers in non-U.S. markets[3].
Put it all together, and American social media is something that children need to be protected from, but the only way to protect them is to cut them off from it entirely. This is the direct result of companies like Meta refusing to respond to concerns in any way other than lobbying the U.S. government to bully other nations into accepting their products as is.
Good on Denmark. I hope my own country follows suit.
------------
[1]https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...
[2]https://www.reuters.com/investigations/meta-is-earning-fortu...
[3]https://www.forbes.com/sites/emmawoollacott/2025/07/22/micro...
Bans like this make much more sense at a community level. Not an individual level.
The reason most parents give up on regulating their children's online activity is that the children end up isolated if an individual household prevents their kid from socializing online. All the other kids are online, so opting out individually ends in isolation. What might be beneficial for each household is unworkable as long as there is no collective mechanism. (which is the case for virtually every problem caused by social networks)
This one hit me recently. My 4th grader has a friend who is on TikTok and has a phone. Me, living in a bubble where the other parents I've met are terrified of social media and phones for their kids, was shocked when I met the mom and she wasn't aware of all the negative impacts of social media. But, like with smokers, you can tell them it's bad for them, but it's up to them to quit.
It's absolutely a collective action problem.
Everybody wants to get on the wave about how children these days are so much worse because of the new thing.
And for literally as long as we have had recorded writing, we have had adults complaining that the children are being ruined by the new culture or the new thing... I mean, we have these complaints from thousands of years ago.
So be careful, you don't have to be completely wrong to still be overreaching.
Sure, but we (as societies) have always had to deal with this. Wherever you are in the world there are things that simply aren't allowed under a certain age, whether that's 15, 16, 18, 21 or whatever.
My (just turned) 16-year-old told me that he didn't think driving a car looked that hard.
Me: "Umm. You'll find out. When you get to it."
If none of your child's friends and classmates have cell phones yet, I'd strongly encourage establishing a smartphone pact with the other parents ASAP. Our community used http://waituntil8th.org pledges but even a shared spreadsheet would work.
If you don't have that you get the rules destroyed by demanding parents bullying administrators and school boards.
The problem is that certain platforms exploit people for profit by feeding them crap, from political propaganda to ads for weight loss drugs. Many of them are designed to be addictive so folks can keep up "engagement". Enough eyeballs make all crap profitable, or something like that.
On the other end of the spectrum, there are tons of great platforms that young people can benefit from, and vice versa. Including HN. Many subreddits. Tons of forums. Loads of online games.
Ban the exploitation. Ban the propaganda. Ban the abuse. But don't ban young people.
But of all the problematic advertising you could choose, you choose instead political advertising and semaglutide ads?
I presume text messaging doesn't count whereas Discord/WhatsApp do? What about Minecraft and other games? What about school platforms which they can post comments/messages on? Is watching YouTube included? When I've filled in surveys about our children's social media use, they have included YouTube, which makes it look like every child is on social media.
This makes it almost sound like a no-op once enough children convince their parents to give exemptions. Hopefully it works out better than that.
Now we know, of course, and everything in hindsight is 20/20.
It's STILL worth trying to regulate social media, now emboldened and firmly established as a rite of passage among youth, adults, and older generations.
Basically, when network connectivity increases, the "bad" nodes can overwhelm the "good" nodes. The other ideas discussed are really interesting; well worth watching.