The signed material does not contain any identifiable information about you, and sites like pornhub can check the token with the identity provider to confirm your age.
You could argue that the sites requesting access tokens won't be cached/stored, but in practice that's not how it'll work. You could also have a separate request-forwarder service that sits between the age-verifier and the site-that-you-don't-want-logged, I guess, which would make it harder to get all the required info in one place.
Require a token to be provided by the requester that is used to sign the response token, so it's limited to a single use.
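A minimal sketch of that idea, with made-up names and a shared HMAC key standing in for the provider's real public-key signature: the site hands out a one-time nonce, the provider signs "over-18" together with that nonce, and the site refuses to accept the same nonce twice.

```python
import hashlib
import hmac
import secrets

# Hypothetical identity-provider key. A real provider would use an
# asymmetric signature; a shared HMAC key keeps the sketch self-contained.
IDP_KEY = secrets.token_bytes(32)

def issue_age_token(site_nonce: bytes) -> bytes:
    # The provider signs only "over-18" plus the site's one-time nonce,
    # so the token can't be replayed or presented anywhere else.
    return hmac.new(IDP_KEY, b"over-18|" + site_nonce, hashlib.sha256).digest()

class Site:
    def __init__(self) -> None:
        self.outstanding: set[bytes] = set()
        self.spent: set[bytes] = set()

    def new_challenge(self) -> bytes:
        nonce = secrets.token_bytes(16)
        self.outstanding.add(nonce)
        return nonce

    def redeem(self, nonce: bytes, token: bytes) -> bool:
        if nonce not in self.outstanding or nonce in self.spent:
            return False  # unknown or already-used challenge
        self.spent.add(nonce)
        return hmac.compare_digest(token, issue_age_token(nonce))

site = Site()
challenge = site.new_challenge()
token = issue_age_token(challenge)        # user fetches this from the provider
assert site.redeem(challenge, token)      # accepted once
assert not site.redeem(challenge, token)  # rejected on replay
```

As written, the provider sees the nonce, so a colluding site and provider could still correlate; blinding or the request-forwarder idea above is what would hide which site the nonce came from.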
But if a kid dies of alcohol poisoning or drunk driving, you can certainly get in serious legal trouble. Those two things (not wanting kids to be harmed by alcohol, and not wanting legal trouble) stop a very large number of adults from giving minors alcohol.
I also would personally prefer we not destroy the internet in pursuit of that goal.
You put severe penalties on the crime, then you catch people doing the crime. Offer a reward for catching people, and I'm sure a few kids will hand people in for the reward. They'll be able to prove they got a token from someone (as they'll have it), then we investigate.
How does it guarantee anonymity?
i.e. in this scenario will they know that my token was passed to PornHub?
Look up Zero Knowledge Proofs, or Kagi's Privacy Pass, if you want to see details.
Setting aside whether age verification is desirable or a net benefit, some of the discourse is colored by folks that want to make it as painful and controversial as possible so they don’t have to do it.
[1] https://americarenewing.com/issues/identity-on-the-internet-...
I have a semantic question, though. If I get tokens from an identity provider which I then pass to an adult website, is that really a "zero knowledge" proof? It's been a while, but I don't think that's a zero knowledge proof. Or maybe it is? I'm not sure what the formal definition is.
The token you redeem isn't the one you received as-is, so there's no way for the token's generator to tie it to your identity. And when a token is redeemed, the generator can only confirm it is valid, not which issuance it came from (and therefore not who is redeeming it or what service they're using). Kagi has some articles on the technical details of their "Privacy Pass" feature.
However, using a VPN and pre-generating tokens is still recommended to prevent side-channel attacks based on timing.
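If it helps make the "blind" part concrete, here's a toy sketch of textbook RSA blind signatures, the kind of construction Privacy-Pass-style tokens build on (deliberately tiny parameters, no padding, and not Kagi's actual implementation): the issuer signs a value it never sees, so a redeemed token can't be matched back to the request that produced it.

```python
import hashlib
import secrets
from math import gcd

# Toy RSA key for the issuer. Real deployments use proper key sizes and
# padding; this only illustrates the blinding math.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. Client: create a random token and blind it before sending to the issuer.
token = secrets.token_bytes(16)
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. Issuer: signs the blinded value after checking the user's age once.
#    It never sees `token` or `m`, so it can't recognise the token later.
blind_sig = pow(blinded, d, n)

# 3. Client: unblind to get an ordinary signature on the token.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any site (or the issuer itself): verify with the issuer's public key,
#    learning only that *some* valid issuance happened, not which one.
assert pow(sig, e, n) == h(token)
```

Real deployments use standardized constructions with proper key sizes and padding, but the linkage-breaking step is the same blinding idea.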
I have two: one of them was fine watching people's faces melting in Raiders of the Lost Ark when he was 6, the other had nightmares for a couple of days after seeing Gollum in LoTR.
The regulations on age are by necessity arbitrary, but I don't think they're completely stupid, even though I agree parents should be the ones responsible in the end.
If the token signifies you are 18+ and nothing else, and the generation limits are reasonable, then people will generate some fraction of their tokens just to sell them, or use their elderly relatives' tokens.
The kids will be trading these tokens to each other in no time. Token marketplaces will emerge. The 18+ function of the token will just become a money/value carrier.
If you limit it to one token per person, the privacy implications will be devastating. All online presence where being 18+ is required will be linked.
So, I don't accept that this is even an acceptable idea. I hate that we are attempting to 'solutionize' on top of bad assumptions, as with this well-meaning article.
The real issue is that there is no proof that this is a 'good thing' to be done; there is no discussion of the loss of privacy rights. It has already been decided that de-anonymising is a good thing for corporations and governments, so the rest is just excuses.
This is actually manipulation on the part of governments to trick and coerce individuals into an action they do not want to take. Therefore, thoughtfully talking about how to 'mitigate the risk' is the equivalent of negotiating with kidnappers over the ransom, when the right answer is: no coercion. The answer to these questions should be that those who want them can opt in, not forcing risk onto everyone.
I'm more than old enough for anything and I have never been 'carded' in my life. In fact I rarely carry ID anyway (even though it's mandatory). Not going to start now.
Unfortunately, the VPN experience has been deteriorating quickly as BigCo and BigGov have been catching up in natural escalation.
Moreover, it's already fairly common for web service operators to proactively block/shadowblock swaths of VPS ranges.
Ah damn. I was hoping that would be a good fallback.
I get why some sites use these kinds of IP filtering, but the net result is sadly bad for anyone trying to do this.
It's taboo in our culture to say this, but what keeps me up isn't just what people are afraid of; it's how far they’ll go to feel safe. That’s how monsters get made.
We’ll trade away the last scraps of online anonymity and build a legally required censorship machine, all for a promise of safety that's always just out of reach. And that machine sticks around long after anyone remembers why it was built, ready to be turned on whoever’s out of favor next, like a gun hanging above the door in Act One.
But say this out loud and suddenly you're the extremist, the one who "doesn’t care about kids." We’re already past the point where the "solution" is up for debate. Now you just argue over how it'll get done. If you actually question the wisdom of hanging surveillance over the doorway of the internet, you get boxed out, or even labeled dangerous.
It's always like this. The tools of control are always built with the best intentions, then quietly used for whatever comes next. History is clear, but polite society refuses to learn. Maybe the only real out-of-the-box thinking left is not buying the story in the first place.
Even with an effective implementation via something like zero knowledge proofs? It seems like it's entirely reasonable to say your position is (in this hypothetical) objectively wrong?
Like arguing that even if we know firefighters save lives, you'd still be against firefighting, because "fear and the desire to feel safe are how monsters are made".
I disagree with these policies (because they aren't safe, and I disagree that children are in a danger best prevented through this kind of measure), but I also disagree with you vehemently. If I'm wrong and we can genuinely prevent harm and the worst cost is an inconvenience (again, without the risk of data leak), then I'm wrong and we should do it.
Not hard to imagine that kids in North Korea are exposed to less web porn. Doesn’t mean we want to live in NK.
The hypothetical that was brought up stops this from being an opinion and moves it into plain fact territory. If you can prevent harm with no downside, doing nothing is not an opinion, it's pretty clearly just immoral.
They brought up the hypothetical in which it is empirically proven to be true.
And I explicitly said I don't think that's the case.
Even TFA recognizes that a good zero knowledge proof system doesn't eliminate all risk; it just reduces and shifts it.
Someone is still controlling the execution of this proof. It's possible to deny people access to gated information. It's not about protection. It is about control.
Particularly as more of society moves from physical to virtual.
When you show a bartender your ID to buy a beer, they generally don't photocopy it and store it, along with your picture, next to an itemized list of every beer you've ever drunk.
This whole premise is absurd. There is tons of research and empirical and historical evidence that living in a surveillance state stifles free expression and thus narrows the richness of human creation and experimentation.
How old are you that you think constant surveillance is any kind of way to live? It's a thin gruel of a life.
But it’s the only one they will ever know.
However, I'm not against the concept of age verification, and believe it can be done well.
How old are you, to make that last comment just because you need your ID to buy a beer?
You keep making this comparison, but it's not appropriate. The closest real-world analogy: in order to buy alcohol, you need to wear a tracking bracelet at all times, and be identified at every store you enter, even if you choose to purchase nothing. If our automated systems can't identify you with certainty, you'll be limited to only being able to do things a child could do.
And the real world has a huge gap between a child and an adult. If an 8-year-old walked into Home Depot and bought a circular saw, there's no law against it, but the store might have questions. If a 14-year-old did it, you might get a different result. At 17, they'd almost certainly let them.
The real world has people that are observing things and using judgement. Submitting to automated age checks online is not that.
So next, we better make the devices age gate their users with attestation and destroy people's ability to use open operating systems on the web. Maybe for good measure we tell ISPs to block any traffic to foreign sites unless the OS reports that attestation.
But people are using VPNs to bounce traffic to other countries anyway, so now we need to ban those. But people still send each other porn over encrypted channels so we need to make sure encrypted platforms implement our backdoor so we can read it all, on top of on-device scanning which further edges out any open source players left in the game.
At what point do we stop chasing the rabbit?
If you take a step back, they are _very_ different, in myriad ways. But to answer your question very concretely: because we're turning the web into a "Papers, Please" scenario and the analogy with "I'm 12 but I can't walk into this smoke shop" doesn't hold. I shared a story on HN that didn't take off about how Google is now scanning _all_ YouTube accounts with AI, and if their AI thinks you're underage, your only recourse after they "kid-limit" your account is to submit a government-issued ID to get it back.
This has nothing to do with buying cigarettes and alcohol. This is about identifying everyone online (which advertisers would be thrilled about), and censoring speech. In short, the mechanisms being used online are significantly more intrusive than anything in the real world.
However, I think tech people risk losing this battle by saying (it seems to me, and in the post I originally replied to) "any attempt at any age checking on the internet is basically 1984", rather than "we need some way of checking some things, keeping people's privacy safe, this current system is awful."
Of course, if some people believe the internet should be 100% uncensored, no restrictions, they can have that viewpoint. But honestly, I don't agree.
This doesn't require any of the draconian 1984 measures that folks are insisting upon. The problem is that there is no real incentive to implement true age verification in this manner (which is why nobody has deployed ZKP); the incentive is to identify everyone. It would be ultra easy to imagine an onboarding scenario during device setup that asks:
1. Will this device be assigned to a child?
2. Supply the age so we can track when they cross over 18
3. Automatically reject responses with the adult header and lock down this setting
But Google and Apple won't do that, because they don't care, and the politicians won't bake it into their laws, because they don't care either: their goal is to alter culture, and protecting children is just an excuse.
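For what it's worth, a rough sketch of what the device-side check from that list could look like, assuming adult sites labelled their responses with a hypothetical "Content-Rating: adult" header (the header name and the child-mode flag are invented for illustration, not an existing standard):

```python
# Hypothetical device-side filter. An OS vendor would wire this into the
# network stack rather than a Python helper; this just shows the logic.
import urllib.request

CHILD_MODE = True  # set once during the imagined device onboarding

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        # Reject any response the site itself labels as adult content.
        if CHILD_MODE and resp.headers.get("Content-Rating", "").lower() == "adult":
            raise PermissionError("blocked by on-device child setting")
        return resp.read()
```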
A good example would be the EU's proposed "chat control" regulation: wiretapping every channel (even encrypted ones) on the off chance illegal material might be shared.
It's really wild. Imagine a hypothetical ideal implementation. ZKP. No privacy issues, completely safe. And yet people are STILL against it. I can understand pro privacy advocates but I really don't know what kind of person would think this.
What's extra wild is there's no justification given for this in your comment. There's some completely unrelated stuff about censorship and anonymity. The stated premise is that there are no privacy issues; you get to keep your privacy.
You say it yourself, it's hypothetical. In reality it will be done in a way that enables all kinds of abuse and security issues.
In my dataset of free-speech limiting examples for safety reasons, 89% eventually expanded in scope to limit speech relating to LGBTQ, feminist, women's health, and politics. This isn't a hypothetical - it happens over and over again. Each time we have folk pointing this out, and each time we have people saying, "You're overreacting."
ZKP or not, if you make Chekhov's gun, someone's going to use it. Privacy isn’t the point. Unless your ZKP also magically prevents scope creep and political misuse, hard pass from me.
It’s like going to the gunsmith and saying, "Don’t build Chekhov another gun, you know it’s going to go off," and he just shrugs and says, "There’s no way it happens a twentieth time."
Not to be overly pessimistic, but I'd say tools of control are only occasionally built with the best intentions. Normally they are built with, though maybe not the worst, certainly bad intentions. Good intentions are the marketing spin that comes after the fact to ease adoption, like lube on a blunt object headed for your nether regions.
- pornhub wouldn't know you used your corp ID
We got as far as a demo out of it but never commercialized it, as far as I know.
This was after a trial project with the UK on ZKP-based age verification, as kind of a next step where you could verify more than your age online.
I also worked on a similar system in 2015, which provided anonymity and unlinkability in almost all interactions (you don't know who it is, and you don't know whether the anonymous user is the same one you saw last time).
You did have to pay for the service of course, but it issued blind signature tokens for access (similar to what is described in the article). So the service did not know who actually did what.
It could also provide anonymous attestation of some attribute (like age). This was a bit more efficient and secure in that you did not need to store a bunch of tokens. It could transform the proof to be unique each time (thus giving unlinkability). It would only work if you had access to your private keys (so you could not just give your age proof token to a kid - you would have to give them your entire account and keys).
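A toy illustration of just that key-binding property, using Ed25519 from the `cryptography` package (everything here is a made-up sketch, not the 2015 system itself): the attestation names the account key, and presenting it requires signing a fresh challenge with that key, so lending the attestation alone is useless. The unlinkability part, where the proof is re-randomized so the key itself isn't shown, needs real zero-knowledge machinery and isn't captured here.

```python
import os

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

issuer_key = Ed25519PrivateKey.generate()    # the attestation service
account_key = Ed25519PrivateKey.generate()   # the user's account key
account_pub = account_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Issuer certifies "over-18" for this specific account key (done once).
attestation = issuer_key.sign(b"over-18|" + account_pub)

# At presentation time the verifier demands a signature over a fresh
# challenge, so the attestation is useless without the matching private key.
challenge = os.urandom(16)
response = account_key.sign(challenge)

# Verifier checks both: the attribute is genuinely certified, and the
# presenter controls the key it was certified for. (Raises on failure.)
issuer_key.public_key().verify(attestation, b"over-18|" + account_pub)
account_key.public_key().verify(response, challenge)
```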
The next question is whether that will work at all. Those who want to find it will. If that is true, then why is this verification in place at all?
You cannot separate the social context from the technical problem; or pretend that if you've designed a cryptographic protocol in some Platonic model reality, you've also solved some real problem in the real world. These things are privacy footguns because people want them to be privacy footguns—they're constructed that way, intentionally. The lack of privacy, the deterrent potential of public shaming, is a desired feature for many of the people pushing these things.
The error is in assuming that privacy is a common, shared value people agree on—a starting point for building technical solutions to. It isn't. It's an ideological dividing line.
[0] https://www.eff.org/deeplinks/2025/07/you-went-drag-show-now... ("You Went to a Drag Show—Now the State of Florida Wants Your Name ")
[1] https://apnews.com/article/florida-drag-show-law-vero-beach-... ("Florida’s attorney general targets a restaurant over an LGBTQ Pride event")
> "Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy... EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database."
This "think of children" phenomenon[1] is a 0-day type phenomenon that is going to change the world a bit like early XP malware did.
1: https://twitter.com/AkkadSecretary/status/195031821425851616...
(Quoting myself from 2021: <https://news.ycombinator.com/item?id=26538052#26560821>)
In principle, you could probably cook up some mechanism to prevent this. But then the information would also be irrevocable in case of error, which I doubt governments would accept. Not that ID verification is a foolproof proxy for the actual physical user in any case, short of invasive "please drink verification can"-style setups, which I worry might look tempting.
The gov't could threaten to revoke the license, but doing so would inconvenience all their users, not just the target. So the third party has leverage to dismiss the gov't.
Of course there are lots of factors in play, but it should be at least a bit better than the gov't doing the age checks.
Then it's up to legislators to make this illegal. Or at least restrict it to specific purposes, and with a judge's approval.