I wonder how far back this has been going on. Did ICQ, IRC server operators, or BBSes do similar things?
It wasn’t until around 2014 that I stopped building routes that did:
    DELETE FROM <table> WHERE id = ?;  -- with foreign keys declared ON DELETE CASCADE
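For context: ON DELETE CASCADE is not valid inside a DELETE statement; it is declared on the foreign key, and the cascade then fires on any plain DELETE. A minimal sketch of the pattern (table and column names are made up):

    CREATE TABLE users (
        id INTEGER PRIMARY KEY
    );

    CREATE TABLE posts (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE
    );

    -- One hard delete on the parent silently removes every child row too:
    DELETE FROM users WHERE id = ?;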
If I ask search.brave.com to give me a list of Gini coefficients for the top ten countries by GDP, it can't do it. But if I tell it the data is available in the CIA World Factbook, it can spit that info out promptly. However, if I close the context and ask again, it hasn't learned this information and is once again unable to provide the list.
It didn't datamine me. It had no better idea where to find this information the second time I asked. This is the experience others have reported with other AIs as well. It does not seem unique to Brave.
I'm not expecting it to be instant. Even next week it won't be there. It's like how AI never learned to count how many times the letter r appears in strawberry. Sure, if you ask Brave now, it will tell you three, but that is only because that question went viral. It didn't "learn" anything; it was just hard-coded for that particular answer. Ask it how many times the letter l appears in smallville and it will get it wrong again.
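For what it's worth, the ground truth is trivial to check deterministically. A one-liner (in SQL, only because that's the one language already quoted upthread; any engine with length() and replace() behaves the same):

    SELECT length('smallville') - length(replace('smallville', 'l', '')) AS l_count;
    -- returns 4 (s-m-a-l-l-v-i-l-l-e)

So a model that confidently answers anything other than four is pattern-matching, not counting.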
Thanks, that was enlightening.
https://news.ycombinator.com/item?id=44778764
Interesting how much traction
"[x] Make this chat discoverable (allows it to be shown in web searches)"
gets in news articles. People don't seem to have the same intuition for the web that they used to!
Just as a PSA: there's nothing unique to AIs here. Whenever you ask a question of anyone, in any way, they then have the memory of you having asked it. A lot of sitcoms and comedic plays build their plots on exactly this, with a question a person voiced eventually reaching (either accurately or inaccurately) the person they were hiding it from.
And as someone who's into spy stories, I know that a big part of tradecraft is formulating your questions in a way that divulges the least about your actual intentions and current information.
If anything, LLM-driven AIs are the first technology that in principle allows you to ask a complex question that is immediately forgotten. The catch is that you need to be running the AI yourself; if you ask an AI controlled by another entity, then you're trusting that entity with your question, regardless of whether there's an AI in between.
It may seem obvious, but Sam Altman also recently emphasized that the information you share with ChatGPT is not confidential, and could potentially be used against you in court. [1][2]
[1] https://www.pcmag.com/news/altman-your-chatgpt-conversations...
[2] https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no...
It would be weird for him not to be transparent about that.
Oh, nice idea. We should all ask that.
Lemme ask ShatGPT how to do that!
https://privacy.anthropic.com/en/articles/10023555-how-do-yo...
> We do not actively set out to collect personal data to train our models
The 'snarky tech guy' tone of the article is a bit like nails on a chalkboard.
Otherwise, I use local models for complex or potentially controversial questions.
What people don't want to do is sign up for yet another subscription. There's immense subscription fatigue among the general population, especially in tough economic times such as now.
roscas•2h ago
But not just AI bots or interfaces. Everything is saved and never deleted.
Remember Facebook? "We will never delete anything" is their business model.
So anything you put on those "services" is out of your hands. But we still have one option: stop using these ad companies and let them die.
Back to AI: there are loads of offline models we can use, and tools like Ollama will even download them for you. Install Ollama, find a model name on the Ollama site, run "ollama run model-name", and you can use it (see the example below).
OK, it is not ChatGPT-5, but it can help you so much that you might not even need ChatGPT.
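For the curious, a minimal session looks something like this (llama3 is just an example; substitute any model name from the Ollama library):

    ollama pull llama3   # download the model weights once
    ollama run llama3    # start an interactive chat

The model runs entirely on your machine, so the conversation never leaves it.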
notpushkin•1h ago
You mean “remind”?