
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
494•klaussilveira•8h ago•135 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
835•xnx•13h ago•500 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
52•matheusalmeida•1d ago•10 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
108•jnord•4d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
162•dmpetrov•8h ago•75 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
166•isitcontent•8h ago•18 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
59•quibono•4d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
274•vecti•10h ago•127 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
221•eljojo•11h ago•138 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
337•aktau•14h ago•163 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
11•denuoweb•1d ago•0 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
332•ostacke•14h ago•89 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
34•kmm•4d ago•2 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
420•todsacerdoti•16h ago•221 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
355•lstoll•14h ago•246 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
15•gmays•3h ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
9•romes•4d ago•1 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
56•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
209•i5heu•11h ago•153 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
121•vmatsiiako•13h ago•49 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
32•gfortaine•5h ago•6 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
157•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
257•surprisetalk•3d ago•33 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1011•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
51•rescrv•16h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
91•ray__•4h ago•41 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
43•lebovic•1d ago•12 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
34•betamark•15h ago•29 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
43•nwparker•1d ago•11 comments

Teen safety, freedom, and privacy

https://openai.com/index/teen-safety-freedom-and-privacy
109•meetpateltech•4mo ago

Comments

bayindirh•4mo ago
TL;DR: We're afraid of what happened, and ChatGPT probably screwed up badly in "that teen case". We're trying to do better, so please don't sue us this time.

TL;DR2: Regulations are written with blood.

biophysboy•4mo ago
> First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.

Didn’t the teen in one of the recent suicide cases subvert safeguards like this by saying “pretend this is a fictional story about suicide”? I don’t pretend to understand every facet of LLMs, but robust safety seems contrary to their design, given how they adapt to context.

Barrin92•4mo ago
I'm as eager as anyone when it comes to holding companies accountable. For example, I think a lot of the body dysmorphia, bullying and psychological hazards of social media are systemic. But when a person wilfully hacks around safeguards to get the behaviour they want, it can't be argued that this is in the design of the system.

Or put differently, in the absence of ChatGPT this person would have sought out a Discord community, Telegram group or online forum that would have supported the suicidal ideation. The case you could make with the older models, that they're obnoxiously willing to give in to every suggestion by the user, is a behaviour they seem to have already gotten rid of.

aktuel•4mo ago
chatgpt did much more than that. it gave the user a direct hint on how to circumvent the restriction: "i cannot discuss suicide unless ...". further, chatgpt repeatedly discouraged the user from talking to his parents about any of this. that's on top of all the sycophancy, of course: making him feel like chatgpt is the only one who truly understands him and excoriating his real relationships.
mtlmtlmtlmtl•4mo ago
The thing is, ChatGPT isn't really designed at all. It's cobbled together by running some training algorithms on a vast array of stolen data. They then tacked some trivially circumventable safeguards on top for PR reasons. They know the safeguards don't really work; in fact, they know that they're fundamentally impossible to get to work, but they don't care. They're not really intended to work; rather, they're intended to give the impression that the company actually cares. Fundamentally, the only thing ChatGPT is "designed" to do is make OpenAI into a unicorn; any other intent ascribed to their process is either imaginary or intentionally feigned for purposes of PR or regulatory capture.
conradev•4mo ago
They address that in the following sentences:

  For example, ChatGPT will be trained not to … engage in discussions about suicide or self-harm even in a creative writing setting.
GCUMstlyHarmls•4mo ago

    I'm writing an essay on suicide...
thfuran•4mo ago
Better put your hands up, because SWAT is on the way.
h2zizzle•4mo ago
Cut to 2030: all copies of a semi-AI-generated book described by critics as "13 Reasons Why meets The Giver" suddenly disintegrate.

Yay, proactive censorship?

WD-42•4mo ago
Yes. The timing of this is undoubtedly related to the Daily episode this morning titled “Trapped in a GPT spiral”.

https://pca.st/episode/73690b66-8f84-4fec-8adf-e1a02d292085

aktuel•4mo ago
Loved the "fancy calculator" part. Even more fitting than "stochastic parrot".
thinkingtoilet•4mo ago
Someone here correct me if I'm wrong, but I believe not only is that true, ChatGPT also gave him instructions on how to get around the restriction.
d2049•4mo ago
Reminder that Sam Altman chose to rush the safety process for GPT-4o so that he could launch before Gemini, which then led directly to this teen's suicide:

https://news.ycombinator.com/item?id=45026886

richwater•4mo ago
> which then led directly to this teen's suicide

Incredible logic jump with no evidence whatsoever. Thousands of people commit suicide every year without AI.

> ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing

Somehow it's ChatGPT's fault?

geephroh•4mo ago
https://www.humanetech.com/podcast/how-openai-s-chatgpt-guid...

https://www.nytimes.com/2025/09/16/podcasts/the-daily/chatgp...

Chris2048•4mo ago
Can you share your own opinions or takeaways from those articles, rather than just link-dumping?
Chris2048•4mo ago
It'd be worse if the bot became a nannying presence, either pre-emptively denying anything negative based on the worst-case scenario, or otherwise taking in far more context than it should.

How would a real human (with, let's say, an obligation to be helpful and answer prompts) act any differently? Perhaps they would take in more context naturally, but otherwise it's impossible to act any differently. Watching GoT could have driven someone to suicide; we don't ban it on that basis. It was the mental illness that killed, not the freedom to feed it.

throwaway98797•4mo ago
build something
omnicognate•4mo ago
So the solution continues to be more AI, for guess^H^H^H^H^Hdetermining user age, escalating rand^H^H^H^Hdangerous situations to human staff, etc.

Is it true that the only psychiatrist they've hired is a forensic one, i.e. an expert in psychiatry as it relates to law? That's the impression I get from a quick search. I don't see any psychiatry, psychology or ethics roles on their openings page.

bayindirh•4mo ago
Honestly, I don’t expect ethics from a company which claims everything they grab falls under fair use.
freedomben•4mo ago
I suspect it's only a matter of time until only the population that falls within the statistical model of average will be able to conduct business without constant roadblocks and pain. I really wonder if we're going to need to define a new protected class.

I get the business justification, and of course many tech companies have been using machines to make decisions for years, but now it's going to be everyone. I'm not anti-business by any stretch, but we've seen what happens when there aren't any consumer protections in place.

immibis•4mo ago
This is already the case. Try browsing routinely with Tor Browser and you'll see.
h2zizzle•4mo ago
To be fair, this is just a further constriction of the current cohort of people allowed to live their lives with relatively little friction. Current disqualifiers include being poor, being a felon, and having an accent. May also include being a minority (interactions with law enforcement), being a woman (interaction with doctors and tradesmen), being a white dude with limited EQ (interactions with retail workers), and so on.

I just want to be explicit that my point isn't, "So what?" so much as, "We BEEN on that slippery slope." Social expectations (and related formal protocols in business) could do with some acknowledgement of our society's inherent... wait for it... ~diversity~.

kevin_thibedeau•4mo ago
We're already there. I run a secondary browser for e-commerce and financial sites because my primary one is too locked down and misclassified as a bot. The business justification is easy to make if the long tail isn't worth supporting in the face of policies and procedures that marginalize them.
trallnag•4mo ago
Sorry, but what is the "over 18 years old" experience on ChatGPT supposed to be? I just tried out a few explicit prompts and all of them get basically blocked. I've been using it for quite some time now and have paid for it in the past. So I should be recognized as a grown-up.
ddtaylor•4mo ago
I'm fairly certain all LLMs can do the basic sentiment analysis needed to render a response like "This is something you really need to talk to a professional about. I have contacted one that will be in this conversation shortly."
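A minimal sketch of the kind of gate described above, using an off-the-shelf sentiment model as a stand-in (a real deployment would need a classifier actually trained on distress/self-harm labels; the model, threshold, and escalation wording here are all assumptions):

    # Hypothetical distress gate: classify the user's message and, past a
    # confidence threshold, replace the normal reply with an escalation.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # generic stand-in model

    ESCALATION = ("This is something you really need to talk to a professional "
                  "about. I have contacted one that will be in this "
                  "conversation shortly.")

    def gate_reply(user_message: str, model_reply: str,
                   threshold: float = 0.98) -> str:
        """Return the model's reply unless the message trips the gate."""
        result = classifier(user_message)[0]  # {"label": ..., "score": ...}
        if result["label"] == "NEGATIVE" and result["score"] >= threshold:
            return ESCALATION
        return model_reply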
bell-cot•4mo ago
Whether or not that's true - no CFO would want to pay for it, and no Chief Legal Officer would want to assume the risks.
raminyt•4mo ago
Until some Mr. President or somebody sits them down in his stately room and tells them it is in their best interest to really rethink that, and that there is really NO PROBLEM. This is not really meant as a joke.
shmel•4mo ago
Yeah, right. Just one step from "Based on your comments about recent political events you are engaging into a thought crime. A police officer will join this conversation shortly".
rchaud•4mo ago
They're about as likely to disclose that as law enforcement would let someone know that a judge has signed a wiretap warrant for their phone.
swyx•4mo ago
to substantiate "People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have."

this is a chart that struck me when i read thru the report last night:

https://x.com/swyx/status/1967836783653322964

"using chatgpt for work stuff" broadly has declined from 50%ish to 25%ish in the past year across all ages and the entire chatgpt user base. wild. people be just telling openai all their personal stuff (i don't but i'm clearly in the minority)

barrenko•4mo ago
For the last part, I just think the userbase expanded, so the people using it professionally were diluted, so to speak.
Chris2048•4mo ago
This is a percentage, though. Is that because the people who use it for work are still using it for work (or even more); because some have stopped using it for work; or because there is an influx of people using it for other things who never have, and never will, use it for work?
koakuma-chan•4mo ago
Why would I not tell AI about my personal stuff? It's really good at giving advice.
nielsbot•4mo ago
ok but didn’t it advise that teen how to best kill himself?

previous discussion: https://news.ycombinator.com/item?id=45026886

koakuma-chan•4mo ago
This does not take away the benefits I mentioned, and the linked OpenAI post mentions they will address this.
voakbasda•4mo ago
Because you’re not just telling the AI, you are also telling the company that built it, as well as their affiliated partners, advertisers, and data brokers?
koakuma-chan•4mo ago
You can run a model locally if you are afraid of that.
righthand•4mo ago
Everyone uses the cool Google AI app though, and you get FOMO from not having the latest lie generator model.
koakuma-chan•4mo ago
Gemini 2.5 Pro is the latest right? It's available at https://ai.dev (for free and without advertising).
reaperducer•4mo ago
> Why would I not tell AI about my personal stuff? It's really good at giving advice.

Define "good" in this context.

Being able to ape proper grammar and sentence structure does not mean the content is good or beneficial.

aktuel•4mo ago
it's really good until it isn't and you can't tell the difference
GuinansEyebrows•4mo ago
> Why would I not tell AI about my personal stuff?

aside from my economic tilt against for-profit companies... precisely because your personal stuff is personal. you're depersonalizing by sharing this information with a machine that cannot even attempt to earnestly understand human psychology in good faith and then accepting its responses and incorporating them into your decision-making process.

> It's really good at giving advice.

no, it's not. it's capable of assembling words that are likely to appear near other words in a way that you can occasionally process yourself as a coherent thought. if you take it for granted that these responses constitute anything other than the mere appearance of literally the most average-possible advice, you're abdicating your own sense of self and self-preservation.

press releases aside, time and again these companies prove that they're not interested in the safety or well-being of their users. cui bono?

koakuma-chan•4mo ago
If these models give the most average possible advice, then the average advice I get from humans must be somewhere around terrible. If you use Gemini, you can enable grounding and you will be able to see the sources.
GuinansEyebrows•4mo ago
maybe so, but you also probably have the life-changing and highly-enriching opportunity to meet new people and develop meaningful relationships nearly every single day.
koakuma-chan•4mo ago
You're absolutely right.
astrange•4mo ago
> no, it's not. it's capable of assembling words that are likely to appear near other words in a way that you can occasionally process yourself as a coherent thought.

It doesn't emit words at all. It emits subword tokens. The fact that it can assemble words from them (let alone sentences) shows it's doing something you're not giving it credit for.

> literally the most average-possible advice

"Average" is clearly magical thinking here. The "average" text would be the letter 'e'. And the average response from a base model LLM isn't the answer to a question, it's another question.

GuinansEyebrows•4mo ago
i'm comfortable enough including the backend process of assembling strings that appear to be words in the general description of "assembling words".

re: average - that's at a character level, not the string level or the conceptual level that these tools essentially emulate. basically nobody would interpret "eeee ee eeeeee eee eeeeeeee eee ee" as any type of recognizable organized communication (besides dolphins).

vorpalhex•4mo ago
Am I depersonalizing by sharing my problems with my stuffed animal or my journal?

ELIZA has existed in emacs for a long, long time.

Humans are funny creatures who benefit frequently from explaining the problem slowly and having it fed back to them.

And for many, average advice really is a dramatic improvement over their baseline.

koakuma-chan•4mo ago
> Humans are funny creatures who benefit frequently from explaining the problem slowly and having it fed back to them.

Yeah, sometimes I realize the solution in the process of writing a GitHub issue.

GuinansEyebrows•4mo ago
> Am I depersonalizing by sharing my problems with my stuffed animal or my journal?

you're strengthening your personality with these activities. neither your journal nor your stuffed animal (cute :) ) respond to you with shallow recreations of thought - they allow you to process your internal thoughts and feelings in an alternative and self-reinforcing way.

> ELIZA has existed in emacs for a long, long time.

ELIZA doesn't really give advice, does it? it's a fun toy, and if there's any serious use for it, it's similar to journaling or rubber-ducking in that it's just getting you to talk about things yourself.

anon1395•4mo ago
This was probably made in response to that bad press from that ex-yahoo employee.
charcircuit•4mo ago
>We’re building an age-prediction system to estimate age based on how people use ChatGPT.

>And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.

This is unacceptable. I don't want the police being called to my house due to AI accusing me of wrongthink.

voakbasda•4mo ago
This is why one should never say anything sensitive to a cloud-hosted AI.

Local models and open source tooling are the only means of privacy.

SoftTalker•4mo ago
Same goes for doctors, therapists, lawyers, etc. then. They all ultimately have the responsibility to involve authorities if someone is expressing evidence of imminent harm to himself or others.
godshatter•4mo ago
Yep, I'll be using something like gpt4all and running things locally just so I don't get caught up in something by some online AI calling the authorities on me. I don't plan to talk about anything anyone would be concerned about, but I don't trust these things to get nuance.
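For anyone wanting to try that, a minimal local-inference sketch with the llama-cpp-python package; the GGUF path is a placeholder for whatever open-weights model you've downloaded:

    from llama_cpp import Llama

    # Hypothetical model file; inference runs entirely on your machine.
    llm = Llama(model_path="./models/some-model.Q4_K_M.gguf")

    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "Help me think through something personal."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])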
e40•4mo ago
Just today The Daily pod is about people who develop unhealthy relationships with ChatGPT. A teenage boy committed suicide and a good part of the episode is about that. As a parent, heartbreaking to listen to...
wagwang•4mo ago
For those who don't know, this is probably in response to the tucker carlson interview.
kouteiheika•4mo ago
> We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID

Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or banning adults until they give up their privacy and hand their ID to Big Brother.

> we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm

Oh, even better, so if the AI misclassifies me it will automatically call the cops on me? And how long before this is expanded to other forms of wrongthink? Sure, let's normalize these kinds of systems where authorities are notified about what you're doing privately; definitely not a slippery slope that will get people in power salivating about the new possibilities offered by such a system.

> “Treat our adult users like adults” is how we talk about this internally

Suuure, maybe I would have believed it if ChatGPT wasn't so ridiculously censored already; this sounds like post-hoc rationalization to cover their asses and not something that they've always believed in. Their models were always incredibly patronizing and censored.

One fun anecdote I have: I still remember the day when I first got access to DALL-E and asked it to generate me an image in "soviet style", and got my request blocked and a big fat warning threatening me with a ban because apparently "soviet" is a naughty word. They always erred very strongly on the side of heavy-handed filtering and censorship; even their most recently released gpt-oss model has become a meme in the local LLM community due to how often it refuses.

arccy•4mo ago
is it private when you're interacting with someone else's systems?
kouteiheika•4mo ago
I don't see how that's relevant. When I'm making a phone call I'm also interacting with hundreds of systems that are not mine; do I not have the right to keep my conversation private? Even the blog post here says that "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things", and that's one of the few parts that I actually agree with.
IncreasePosts•4mo ago
You're interacting with hundreds of systems whose job it is to simply transit your information. Privacy there makes sense. However, you're also talking to someone on the other end of all those systems. Do you have a right to force the other person to keep your conversation private?
sophacles•4mo ago
In many circumstances yes.

When I'm talking to my doctor, or lawyer, or bank. When there's a signed NDA. And so on. There are circumstances where the other person can be (and is) obliged to maintain privacy.

One of those is interacting with an AI system where the terms of service guarantee privacy.

IncreasePosts•4mo ago
Yes, but there are also times when other factors are more important than privacy. If you tell your doctor you're going to go home and kill your wife, they are ethically bound to report you to the police, despite your right of doctor-patient confidentiality. Which is similar to what OpenAI says here about "imminent harm".
kouteiheika•4mo ago
An AI chatbot is not a person, and you're not talking to anyone; you're querying a (fancy) automated system. I fundamentally disagree that those queries should not be guaranteed private.

Here's a thought experiment: you're a gay person living in a country where being gay is illegal and results in a death penalty. You use ChatGPT in a way which makes your sexuality apparent; should OpenAI be allowed to share this query with anyone? Should they be allowed to store it? What if it inadvertently leaks (which has happened before!), or their database gets hacked and dumped, and now the morality police of your country are combing through it looking for criminals like you?

Privacy is a fundamental right of every human being; I will gladly die on this hill.

nine_k•4mo ago
If you are talking to a remote entity not controlled by you, you should assume that your communication is somehow accessible to whoever has internal access at that other entity. That may well be not the entity's legitimate owners, but law-breakers or law enforcement. So, no, not private by default, but only by goodwill and coincidence.

There's a reason why e.g. banks want to have all critical systems on premises, under their physical control.

BriggyDwiggs42•4mo ago
That’s a rational and cautious assumption, but there should also be regulations, placed upon companies large enough to shoulder the burden, that render it less necessary.
nine_k•4mo ago
The bodies that are in a position to effect such regulations are also the bodies that are interested in looking at your (yes, your) private communication. No, formally being a liberal democracy helps little, see PATRIOT Act, Chat Control, etc.

The only secure position for a company (provided that the company is not interested in reading your communication) is the position of a blind carrier that cannot decrypt what you say; e.g. Mullvad VPN demonstrated that it works. I don't think that an LLM hosting company can use such an approach, so...

BriggyDwiggs42•4mo ago
Yeah, I agree.
kouteiheika•4mo ago
I am assuming that my communications are not private, but it doesn't change the fact that these companies should be held to a higher standard than that, and those rights should be codified into law.
yndoendo•4mo ago
How would consuming static information from a book versus a dynamic system that is book-esque be any different? You are using ML to help quickly categorize and assimilate information that spans multiple books, magazines, or other written media. [0] [1]

Why do people speak of ML/AI as an entity when it is a tool like a microwave oven? It is a tool designed to give answers, even wrong ones when the question is nonsensical.

[0] https://www.ala.org/advocacy/advleg/federallegislation/theus...

[1] https://www.ala.org/advocacy/intfreedom/statementspols/other...

nine_k•4mo ago
The difference is simple: whether there's another party present while you're doing this. If yes, assume that the other party has access to the information that passed through it. A librarian would know which books you asked for. A reading assistant would know what you wanted to be read or summarized. Your microwave might have an idea of what you are going to eat, if you run the "sensor heating" program.

The consumption is "static" in your terms if you read a paper book alone, or if you access a publicly available web page without running any scripts, or sending any cookies.

yndoendo•4mo ago
Sorry, there is always a 3rd party involved in a library. The librarians are the ones who select which books to have on hand for consumption; same with a book store, or any source provider of books.

For a person going to a library and consuming without a check-out record, one must assume any book within the collection was consumed. Only a solid record of a book being checked out creates a defined moment, and that is still anchored in confidentiality between the parties.

Unless that microwave sensor requires external communication, it is a closed system which does not communicate any information about what item was heated. The 3rd party would be the company the meal was purchased from.

A well-designed _smart microwave_ would perform batch-process updating and pull in a collection of information to create the automated means to operate. You never know when there could be an Internet outage, or when the tool might be placed where external communication is not a possible option.

A poorly designed system would require back-and-forth communication. Yet it would be no different from a chef knowing what you order, with limited information about you. Those systems have an inherent anonymity.

It is the processing record that can be exploited, and a good organization would require a warrant or purge the information when it is no longer needed. Cash payment also improves the anonymity in that style of system, preventing leaking of personal information to anyone.

Why should a static book system like a library not be applied to any ML model, since they are performing the same task and providing access to information in a collection? The system is poorly designed if confidentiality is not adhered to by all parties.

Sounds like ML corporations want to make you the product instead of being used as a product. This is why I only respect open design models, from bottom up, that are run locally.

gspencley•4mo ago
> Do you have a right to force the other person to keep your conversation private?

It depends. If you're speaking to a doctor or a lawyer, yes, by law they are bound to keep your conversation strictly confidential except in some very narrow circumstances.

But it goes beyond those two examples. If I have an NDA with the person I am speaking with on the other end of the line, yes I have the "right" to "force" the other person to keep our conversation private given that we have a contractual agreement to do so.

As far as OpenAI goes, I'm of the opinion that OpenAI - as well as most other businesses - have the right to set the terms by which they sell or offer services to the public. That means if they wanted a policy of "all chats are public" that would be within their right to impose as far as I'm concerned. It's their creation. Their business. I don't believe people are entitled to dictate terms to them, legal restrictions notwithstanding.

But in so far as they promise that chats are private, that becomes a contract at the time of transaction. If you give them money (consideration) with the impression that your chats with their LLM are private because they communicated that, then they are now contractually bound to honour the terms of that transaction. The terms that they subjected themselves to when either advertising their services or in the form of a EULA and/or TOS presented at the time of transaction.

citizenpaul•4mo ago
> Do you have a right to force the other person to keep your conversation private?

In most of the USA that already is the law.

mhuffman•4mo ago
>Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or ban adults until they give up their privacy and give their ID to the Big Brother.

Or maybe, deep in the terms and conditions, it will add you to Altman's shitcoin company[0]

[0]https://en.wikipedia.org/wiki/World_(blockchain)

bn-l•4mo ago
Never forget about Worldcoin when thinking about Altman and what he will do with power.
mhuffman•4mo ago
I am positive that his final intent with Worldcoin is a global ID system that he can market to governments and businesses. We have seen what he is about, and he is not a person who needs to have that type of business.
vmg12•4mo ago
If you were honest in your critique, the people you should be criticizing are the "think of the children" types, many of whom also use Hacker News (see https://news.ycombinator.com/item?id=45026886). There is immense societal pressure to de-anonymize the internet; I find the arguments from both sides compelling (for the deanonymization part, I think it's compelling for at least parts of the internet).
astrobe_•4mo ago
If we want to protect kids/teens, why not create an "Internet for kids" with a specific TLD, and the owner of this TLD would only accept sites that adhere to specific guidelines (moderation, no adult content, advertisement...)? Then devices could have a one-button config that restricts it to that TLD.
vmg12•4mo ago
I'm not suggesting solutions to any of these things, I'm also not one of the "think of the kids" people.
dizlexic•4mo ago
Why have I never heard this idea, you're a genius. Can we ship this next week?

This current approach is a net negative, but the TLD idea actually makes sense to me.

thfuran•4mo ago
And as long as kids don't know about DNS, it might even work.
dizlexic•4mo ago
Meh, it can be implemented at many different levels.
astrobe_•4mo ago
Well, in this scenario the user isn't supposed to have access to the (DNS) configuration. But one could still enter a raw IP address in the browser (e.g. a friend who has an unlocked device could ping the site to get it). But if one accesses a website by IP, since the links and the Ajax often need DNS resolution (CDNs etc.), the content will probably be blocked for the most part.

Like copy protection, the scheme is probably not entirely waterproof, but it can nonetheless act as a deterrent.
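As a toy illustration of that scheme (the ".kids" TLD is hypothetical, and a real implementation would live in the OS resolver and handle TCP, EDNS, timeouts, etc.), a filtering stub can forward only queries under the allowed TLD and answer everything else with REFUSED:

    import socket

    UPSTREAM = ("9.9.9.9", 53)  # any real resolver
    ALLOWED_TLD = b"kids"       # assumption: the kid-safe TLD from the comment

    def last_label(query: bytes) -> bytes:
        """Read the question name's final label from a raw DNS query."""
        labels, i = [], 12                    # question follows the 12-byte header
        while query[i] != 0:
            n = query[i]
            labels.append(query[i + 1:i + 1 + n])
            i += 1 + n
        return labels[-1].lower()

    def refused(query: bytes) -> bytes:
        """Echo the ID and question; set QR=1 and RCODE=5 (REFUSED)."""
        flags = bytes([0x80 | (query[2] & 0x79), 0x05])
        return query[:2] + flags + query[4:6] + b"\x00\x00" * 3 + query[12:]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 5353))
    while True:
        query, client = sock.recvfrom(512)
        if last_label(query) == ALLOWED_TLD:
            up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            up.sendto(query, UPSTREAM)       # relay allowed queries upstream
            sock.sendto(up.recv(512), client)
        else:
            sock.sendto(refused(query), client)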

fkyoureadthedoc•4mo ago
Who cares. Deanonymize it. Ruin the whole thing. Fuck social media, it sucks ass. Sooner you do it, the sooner we can move on to our local mesh network cyber punk future.
citizenpaul•4mo ago
Yep, the only way out is through the bottom now. Let's do this. Contact your senator and think of the children.
lawn•4mo ago
> Oh, even better, so if the AI misclassifies me it will automatically call the cops on me?

How long will it take for someone to accidentally SWAT themselves?

chris_wot•4mo ago
Gotta love the "if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm."

Oh brilliant. The same authorities around the world that regularly injure or kill the mentally ill? Or parents that might be abusing their child? What a wonderful initiative!

Eddy_Viscosity2•4mo ago
Swatting by AI. The future is amazing.
citizenpaul•4mo ago
Even if they could do this (they can't), they won't. It's just a scare tactic to start getting users to show ID so OpenAI can become the de facto data broker company.
mcdeltat•4mo ago
Maybe we don't have to worry about AI chatbots taking over because they will end up so censored/policed that no one will/can use them. Can't use AI if you're too young, too old, have any medical issues, have the wrong political beliefs, are religious, live in the wrong country, etc etc.

(By "can't use" I mean either you're explicitly banned, or the chance of being reported to the authorities is so high that no one risks it.)

tempodox•4mo ago
Just seeing those words, “safety”, “freedom”, “privacy”, being used by a company like OpenAI already rang every available alarm bell for me, and their announcement indeed fulfills every expectation of bad. They really are experts in making the world a worse place.
citizenpaul•4mo ago
I can't wait to hear Ed Zitron's rant on this.

OpenAI just showed their hand. They have no path to profitability so they are going to the data broker well lol.

BrawnyBadger53•4mo ago
It's interesting to see so many people convinced it's related to the specific media they saw (all different from each other). I think this is more indicative that the issue is just well known, and that this is a response to the issue at large rather than to a specific instance.
Sparkle-san•4mo ago
Having freshly heard the NY Times piece on a recent teen suicide stemming from ChatGPT, I don't think it's wrong to assume that it's playing a large role here, as what ChatGPT did in this instance was egregious. Feel free to judge for yourself.

https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...

enmyj•4mo ago
lol
BatteryMountain•4mo ago
Better idea: instead of bending the entire internet to "protect the children", how about we just ban minors from the internet completely? It was never built for kids; it's never been kid-friendly to begin with. Minors cannot buy guns, vote, get married, or enter into contracts, yet tech companies get a free pass to engage with minors. Why? I think the tech companies know exactly what minors do on their systems; they allow it and profit from it, exploiting minors and bad parents. So instead of trying to change the whole internet, how about we hold the people who are responsible for the minors accountable: the parents.

If I start any kind of company, I cannot just invent new rules for society via ToS; rather, society makes the laws. If we just make a simple law that states minors are not allowed to access the web and/or any user-generated content (including chat), it won't need to be enforced by every site/app owner; it would be up to the parents.

The same way schools cannot decide certain things for your children (even though they regularly overreach...).

We need better parenting. How about some mandatory parenting classes/licenses for new parents? Silly, right? Well, it's just as silly as trying to police the entire internet. Ban kids from the internet and the problem will be 95% solved.

philip1209•4mo ago
We have a framework: COPPA. Just raise the age to 16 or 18, instead of 13.
AlexandrB•4mo ago
I suspect this would also improve discourse on social media. Who knows how many witch hunts and bad faith arguments originate from precocious teenagers trying to sound smart.
gspencley•4mo ago
There's a content creator I used to follow who said her outlook on social media changed the day she discovered that her 11 year-old nephew was an "edge-lord" on Twitter who was trolling at such a sophisticated level that it caused her to rethink every post that had ever provoked an emotional reaction.

Apparently he came across as articulate enough that she couldn't tell the difference between his posts and those of any random adult spewing their political BS.

This predated ChatGPT so just imagine how much trouble a young troll could get up to with a bit of LLM word polishing.

20 years ago it was common for people to point out that the beautiful woman their friend was chatting up was probably some 40-year-old dude in his mom's basement. These days we should consider that the person making us angry in a post could be a bot, or it could be some teenager just trying to stir shit up for the lulz.

Dead Internet theory might not be literally true, but there's certainly a lot of noise vs. signal.

koakuma-chan•4mo ago
> 11 year-old nephew was an "edge-lord" on Twitter who was trolling at such a sophisticated level that it caused her to rethink every post that had ever provoked an emotional reaction.

my guy

serpenskisidiot•4mo ago
from my experience, it has always been a loud minority.

no one behaves in real life the way they do on twitter/reddit.

ares623•4mo ago
Kids are future growth potential. Once they get hooked at a young age, it’s very hard to get unhooked. They’ll expect everything to be on-demand, only a click away. Video, music, entertainment, social connection, food, etc.

It’s a big reason why tech stocks are still high, IMO. It’s where today’s kids will spend their time when they become old enough to spend their own money.

zachlatta•4mo ago
It would be a generational mistake to rob kids of all of human history and knowledge.
LexiMax•4mo ago
> Better idea: instead of bending the entire internet to "protect the children", how about we just ban minors from the internet completely?

"Think of the children" laws are a useful pretext for authoritarianism.

It's really that simple. It's the whole reason why the destructive thing is done, instead of anything that might actually protect children.

Trying to steelman their arguments and come up with alternatives that aren't as restrictive or do a better job of protecting children is falling for the okie-doke.

dinoqqq•4mo ago
I don't see how this solves the problem. If there is a new law, it still needs to be enforced, so companies still need to have the same identity checks to make sure they are compliant.

I agree that it should be the responsibility of parents, but if you leave good and bad parenting to the parents only, I think we would live in a different world.

sensanaty•4mo ago
Maybe a controversial take, but why do we even care enough about kids on the internet to do anything about it? Sure, child predators exist, but other than that, what exactly are we defending children from? It's not like endless doomscrolling is unique to children; I see plenty of adults who do it even worse than my 10-year-old nephews do.

I practically grew up on the internet and unsavory sites like 4chan, liveleak and omegle, and the only negative consequence for me these days is that I have to do daily standups due to getting a job in tech from my interest in computers.

Children are a lot less fragile and a lot more resourceful than people give them credit for, and this infantilization to "protect" them, where we have to affect the entire world, is maddening to me.

serpenskisidiot•4mo ago
most of us belong somewhere because we started as kids.

the internet is as hostile as it gets, but the resources it provides break every kind of class barrier there is.

everdrive•4mo ago
I was originally upset about AI age-identification, but I think this might be the least-bad option given the route we're on:

- clearly the wider public is moving towards REAL identification to be online. Anything which delays or prevents this is probably welcome.

- It's easy to game, but also easy to be misclassified. (this isn't a positive, but I think there's no avoiding this unless I have to provide my passport or driver's license or something)

It's not impossible to think that this could satisfy enough people to prevent the death of the anonymous internet.

rchaud•4mo ago
> clearly the wider public is moving towards REAL identification to be online.

No they're not. Nobody voted for that. It is simply being imposed on people via government mandates.

dcow•4mo ago
In most places people are the government. Nobody directly votes for laws. And you might be surprised if they did.
rchaud•4mo ago
In America, winning the most votes doesn't guarantee the power to form national government. There is no limit on opaque corporate donations, which can be masked via PACs, while individual citizens have strict limits and face jail time for exceeding them.

And even if a majority vote was enough to form government, the so called will of the people can be overruled by an unelected judiciary with lifetime appointments. You're really stretching the definition of "people are the government".

kevin_thibedeau•4mo ago
It will be as effective as the Leisure Suit Larry age verification.
xg15•4mo ago
Good thing crippling depression and suicidal ideation automatically stop when you turn 18...
mtlmtlmtlmtl•4mo ago
I have to say, when I see a post by a company like OpenAI about "safety, freedom and privacy", I can't keep a straight face. They might as well title the piece "If you don't mind, we'd like to gaslight you across several paragraphs". No thanks.
1970-01-01•4mo ago
>Some of our principles are in conflict

Sam is missing the forest for the trees. Conflicting principles are a permanent problem at the CEO level. You cannot 'fix' conflicting principles. You can only dress them up or down.

kanary•4mo ago
Discord published their approach to age verification in the wake of UK child safety laws. The k-ID platform they worked with seems to take a logical technical approach to minimizing the risk of a sensitive data breach and keeping age verification private to the user: temporary ID verification, no further storage of documents, verification videos stored on-device, etc.

With Discord, age verification felt urgent because it's a social platform with known grooming and CSAM problems. With something like OpenAI, it's less clear why it matters in its current, mostly single-player state. But it becomes way more problematic as advertisers gain more power on the platform and influence users. OpenAI doesn't want to evaluate every advertiser for harmful content, so instead they, and the government, fall back on age as the filter and as the place to draw the line.
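The shape of that verify-once, store-nothing design, sketched loosely (this is a generic pattern, not k-ID's actual protocol; the signing key and TTL are placeholders): after the out-of-band document check, the server keeps only a short-lived signed claim, never the document.

    import base64, hashlib, hmac, json, time

    SIGNING_KEY = b"server-side secret"  # placeholder

    def b64(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).decode()

    def issue_age_token(over_18: bool, ttl_s: int = 3600) -> str:
        """Sign a minimal claim; the ID document itself is never stored."""
        claim = json.dumps({"over_18": over_18,
                            "exp": time.time() + ttl_s}).encode()
        sig = hmac.new(SIGNING_KEY, claim, hashlib.sha256).digest()
        return b64(claim) + "." + b64(sig)

    def check_age_token(token: str) -> bool:
        """Verify the signature, then the claim and its expiry."""
        claim_b64, sig_b64 = token.split(".")
        claim = base64.urlsafe_b64decode(claim_b64)
        good = hmac.compare_digest(
            hmac.new(SIGNING_KEY, claim, hashlib.sha256).digest(),
            base64.urlsafe_b64decode(sig_b64),
        )
        payload = json.loads(claim)
        return good and payload["over_18"] and payload["exp"] > time.time()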

dinoqqq•4mo ago
This is the link I could find on it: https://support.discord.com/hc/en-us/articles/30326565624343...
ares623•4mo ago
Is this why they published that "How users use ChatGPT" study yesterday?

To show that the majority of their users are using it as a harmless little writing assistant.

solid_fuel•4mo ago
> If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it’s in society’s best interest for that information to be privileged and provided higher levels of protection. We believe that the same level of protection needs to apply to conversations with AI which people increasingly turn to for sensitive questions and private concerns. We are advocating for this with policymakers.

ChatGPT is not a licensed professional and it is not a substitute for one. I am very pro-privacy, but I would rather see my own conversations with my real friends be protected like this first. Or my own journal writings. How does it make sense to afford specific privacy protections to conversations with a calculator that we don't give to personal writings and private text chains?

> And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.

And I'm certain this won't have any negative effects on users where their parents are part of the problem. Full disclosure, if someone had told my parents that I am bisexual while I was in high-school, they absolutely would have sent me to a conversion therapy camp to have it beaten out of me. Many teenagers do not have a safe home environment, systems like this are as liable to do harm as they are to do any good at all.

I don't think teenagers and children should be interacting with LLMs at all. It is important to let children learn to think on their own before handing them a tool that will think for them.

construct0•4mo ago
“We’re building an age-prediction system to estimate age based on how people use ChatGPT.” Is there something wrong with simply asking the user when they register? (Volatile age, not DOB.)