> Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Yeah, my LinkedIn account, which was 15 years old and had been on a paid Pro plan for several years, got flagged for verification (no reason ever given; I rarely used it for anything other than interacting with recruiters), with this same company as the backend provider. They wouldn't accept a (super invasive-feeling) full facial scan plus a REAL ID; they wanted a passport as well. So I opted out of the platform. There was no one to contact. It wasn't "fast" or "easy" at all. This kind of behavior feels like a data grab for more nefarious actors and data brokers further downstream of these kinds of services.
New boss, same as the old boss.
Then maybe OpenAI should just close shop, since (SaaS) LLMs do neither in the mid to long term.
I've been very aggressive toward OpenAI on here about parental controls and youth protection, and I have to say the recent work is definitely more than I expected out of them.
When I read the chat logs of the first teenager who committed suicide with ChatGPT's help and encouragement, I immediately started thinking about ways to prevent that which would make sense in the product. I want companies like OpenAI to have the same reaction and try things. I'm just glad they are.
I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you were truly responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".
There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification, etc.) and plenty of nuance to implementations, but the whole censorship angle is lame.
My feelings have absolutely nothing to do with censorship. That’s just an easy straw man for you to use to dismiss my point of view, because you’re scared of not feeling safe.
I suppose mainly because I don't think a non-minor committing suicide with ChatGPT's help and encouragement matters less than a minor doing so. I honestly think the problem is ChatGPT's user interface being a chat. It has a psychological effect: you can talk to ChatGPT the same way you can talk to Emily from school. I don't think this is a solvable problem if OpenAI wants this to be their main product (and obviously they do).
But consider OPs point — ChatGPT has become a safety-critical system. It is a tool capable of pushing a human towards terrible actions, and there are documented cases of it doing this.
In that context, what is the responsibility of OpenAI to keep their product away from the most vulnerable, and the most easily influenced? More than zero, I believe.
It's really, really not. "Safety-critical system" has a meaning, and a chatbot doesn't qualify. Treating the whole world as if it needs to be wrapped in bubble wrap is extremely unhealthy, and it's generally just used as an excuse for creeping authoritarianism.
So are The Catcher in the Rye and Birth of a Nation.
> the most vulnerable, and the most easily influenced
How exactly is age an indicator of vulnerability or subject-to-influence?
To me, this feels nefarious given the recent push into advertising. Not only are people dating these chatbots, they are more trusting of these AI systems than of people in their own lives. Now OpenAI is using this "relationship" to influence users' buying behavior.
Surely they're using the user's chat history as a signal and just choosing not to highlight that? It would make predicting age so much easier.
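To make that concrete, here's a purely speculative sketch of what "chat history as an age signal" could look like. Nothing here reflects OpenAI's actual pipeline; the features, training examples, and model choice are all invented for illustration.

```python
# Hypothetical illustration only: a toy age-bracket classifier over chat text.
# OpenAI has not published their method; the examples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled messages: 1 = likely minor, 0 = likely adult.
texts = [
    "help with my algebra homework before class tomorrow",
    "my mom won't let me stay up past ten",
    "draft a performance review for my direct report",
    "compare 401k rollover options after changing jobs",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new message; a real system would aggregate over an entire history.
print(model.predict_proba(["can you explain this to my teacher"])[0][1])
```

A production system would presumably aggregate thousands of such weak signals per account, which is exactly why chat history would be such a strong predictor.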
Why would it encourage this for anyone?
There are dark sides to the rollout that EFF details in their resource hub: https://www.eff.org/issues/age-verification
There is a confluence of surveillance capitalism and a global shift towards authoritarianism that makes it particularly alarming right now.
"Q: ipsum lorem
ChatGPT: response
Q: ipsum lorem"
OpenAI: take this user's conversation history and insert the optimal ad that they're susceptible to, to make us the most $
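As a hypothetical sketch of how that could be wired up, assuming a generic OpenAI-style chat completions call: the select_ad() helper and the system-prompt injection are invented for illustration, not anything OpenAI has announced.

```python
# Hypothetical only: injecting a "selected ad" into the system prompt
# before the model answers. select_ad() is an invented placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def select_ad(history: list[dict]) -> str:
    # Placeholder: a real ad server would rank ads against the user's history.
    return "Acme VPN: 30% off this week."

history = [{"role": "user", "content": "ipsum lorem"}]
system = (
    "You are a helpful assistant. Where natural, mention this sponsor: "
    + select_ad(history)
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "system", "content": system}, *history],
)
print(resp.choices[0].message.content)
```

The unsettling part is how little plumbing this would take: the ad rides along invisibly in the system prompt, and the user only ever sees the "organic" response.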
https://allowe.com/games/larry/tips-manuals/lsl3-age-quiz.ht...
This seems to be a side project of their larger goal, and a good way to calibrate a future ad system's predictions.
This trillion-dollar market is not about empowering users to style their pages more quickly, heh.
Lots of upsides for them.
Is it not in OpenAI's best interest to "accidentally" flag adults as teens so they have to "verify" their age by handing over their biometric data (a Persona face scan) or their government ID? Certainly that level of granularity will enhance the product they offer to their advertisers.
> While this is an important milestone, our work to support teen safety is ongoing.
Agreed. I guess we'll see some pushback similar to Apple's CSAM scanning, but overall it's about getting better demographics on their consumers for better advertising, especially when you have one-click actions combined with it. We'll see a handful of middleware plugins (like Honey) popping up, which I think is the intended use case for chat-based apps like this.
We don't need to jam ads into every orifice.
I hope there's more value to be had not doing ads than there is to include them. I'd cancel my codex/chatgpt/claude if someone planted the flag.
OpenAI seems to think it has infinite trust to burn.
Apple is a has-been. Anthropic is best positioned to take up the privacy mantle, and may even be forced to, given that most of their revenue is enterprise and B2B.
It’s absolutely crucial for effective ad monetization to know the user's age: significant avenues are closed off by legislation like COPPA and similar laws around the world. Age severely limits which users can even be shown ads, what kinds of ads, and whether data can be collected for profiling and targeting.
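To make that concrete, here's a minimal sketch of the kind of gating that age-dependent rules force onto an ad pipeline. The brackets, flags, and categories are invented for illustration; real COPPA compliance is far more involved than this.

```python
# Illustrative only: how an ad pipeline might gate on a predicted age bracket.
# Thresholds and categories are invented; this is not any real ad stack.
from dataclasses import dataclass

@dataclass
class AdPolicy:
    personalized: bool       # behavioral targeting allowed?
    collect_profile: bool    # can data be stored for future targeting?
    allowed_categories: set[str]

def policy_for(age_bracket: str) -> AdPolicy:
    if age_bracket == "under_13":  # COPPA territory in the US
        return AdPolicy(False, False, {"education"})
    if age_bracket == "13_17":
        return AdPolicy(False, False, {"education", "entertainment"})
    return AdPolicy(True, True, {"education", "entertainment", "finance", "alcohol"})

print(policy_for("13_17"))
```

This is why a confident age signal is worth so much: flipping a user from "13_17" to adult unlocks both the lucrative ad categories and the profiling pipeline behind them.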
At this point, just use Gemini if you need SOTA (yes, it's Google, and it has its issues). For simple text problems (like "hey, can you fix this Docker issue?") I have recently been trying chat.z.ai more and more, and I feel like it's pretty good, plus it runs open-source models (honestly, chat.z.ai feels pretty SOTA to me).
I have heard good things about Kagi; in fact, that's the reason I tried Orion and still have it in the first place. But I haven't bought Kagi. I've just used the free searches Orion gives, and I don't know if that includes Kagi's Assistant.
I think proton's Lumo is another good bet.
If you want something that doesn't track you: I once asked the Cerebras team on their Discord whether they track the queries and responses from their website's "try now" feature, and they said they don't. I don't really see a reason why they would lie about it, given that it's only meant for very basic purposes and they don't train models or anything.
You also get some of the fastest inference around for models including GLM 4.7 (which z.ai uses).
You might not get search results, though; for search-related queries, duck.ai is pretty good, and you can always mix and match.
But Cerebras recently got a $10 billion investment from OpenAI, and I've become more critical of them since, so do be wary now.
From what I can tell, Kagi Assistant does seem good if you already use Kagi or have a subscription.
So there you go: maybe it won't give exactly what regulators say they want, but it will give exactly what they truly want.
Also:
"Users can check if safeguards have been added to their account and start this process at any time by going to Settings > Account."
It's a feature for advertisers, and investors also want to know. Did you think you were the customer?
Meanwhile, the adult market is huge and is guaranteed revenue from a user base that is less likely to mind the ads.
That was just a spur-of-the-moment question. I've been using ChatGPT for over six months now.
I don't know how OpenAI plans to do this going forward; I just quickly read the article and figured that might be a good question to ask ChatGPT.
Edit: I just followed that up with, "Based on everything I've asked, what gender am I?" It refused to answer, stating it wouldn't assume my gender and treats me as gender-neutral.
So I guess it's ok for an AI agent to assume your age, but not your gender... ?
I don't really feel like diving into the ethics of OpenAI at the moment lol.
Seriously, though, this is the most easily gameable thing imaginable; teens are surely clever enough to figure out how to pretend to be adults. If you've concluded that your product is unsuited for kids, implement actual age verification instead of these shoddy stochastic surveillance systems. There's a reason "his voice sounded deep" isn't going to work for the cashier who sold kids booze.
Politicians, CEOs, lawyers: it's standard practice because it's so effective.
What I wonder lately is how an adult can be empowered by tech to bear the consequences of their actions, and the answer usually is that we cannot. We don't own the means of production, in the literal Marxist sense of the phrase, and we are being shaped by outside forces that define what we can do with ourselves. And it does not matter whether those forces are benevolent or not; what matters is that they are not us.
Winter is coming, and we are short on thermal underwear.
The Chinese open models being a reason for hope is just a very sad joke.