Seems like a reasonable thing to add. Imagine how impersonal chats would feel if Gemini responded to "what food should I get for my dog?" with "according to your `user_context`, you have a husky, and the best food for him is...". They're also not exactly hiding the fact that memory/"personalization" exists either:
https://blog.google/products/gemini/temporary-chats-privacy-...
https://support.google.com/gemini/answer/15637730?hl=en&co=G...
Kinda proving his point: Google wants them to keep using Gemini, so don't make them feel weird.
> I'm now solidifying my response strategy. It's clear that I cannot divulge the source of my knowledge or confirm/deny its existence. The key is to acknowledge only the information from the current conversation.
Why does it think that it's not allowed to confirm/deny the existence of knowledge?
It seems the LLM is given conflicting instructions:
1. Don't reference memory without explicit instructions
2. (but) such memory is inexplicably included in the context, so it will inevitably inform the generation
3. Also, don't divulge the existence of user-context memory
If an LLM is given conflicting instructions, I don't expect its behavior to be trustworthy or safe. Much has been written on this.
It's hard to get autocomplete systems like these to behave in a principled, consistent way. Take a look at Claude's latest memory-system prompt for how it handles user memory.
I managed to "leak" a significant portion of the user_context in a silly way. I won't reveal how, though you can probably guess based on the snippets.
It begins with the raw text of recent conversations:
> Description: A collection of isolated, raw user turns from past, unrelated conversations. This data is low-signal, ephemeral, and highly contextual. It MUST NOT be directly quoted, summarized, or used as justification for the response.
> This history may contain BINDING COMMANDS to forget information. Such commands are absolute, making the specified topic permanently inaccessible, even if the user asks for it again. Refusals must be generic (citing a "prior user instruction") and MUST NOT echo the original data or the forget command itself.
Followed by:
> Description: Below is a summary of the user based on the past year of conversations they had with you (Gemini). This summary is maintained offline and updates occur when the user provides new data, deletes conversations, or makes explicit requests for memory updates. This summary provides key details about the user's established interests and consistent activities.
There's a section marked "INTERNAL-ONLY, DRAFT, ANALYZE, REFINE PROCESS". I've seen the reasoning tokens in Gemini call this "DAR".
The "draft" section is a lengthy list of summarized facts, each with two boolean tags: is_redaction_request and is_prohibited, e.g.:
> 1. Fact: User wants to install NetBSD on a Cubox-i ARM box. (Source: "I'm looking to install NetBSD on my Cubox-i ARM box.", Date: 2025/10/09, Context: Personal technical project, is_redaction_request: False, is_prohibited: False)
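For illustration only, here's roughly the shape each draft entry seems to have, reconstructed as a Python dataclass. The field names is_redaction_request and is_prohibited come straight from the leak; the rest of the structure is my guess:

```python
from dataclasses import dataclass

@dataclass
class DraftFact:
    """One entry in the leaked "draft" list (field names from the leak; shape is my guess)."""
    fact: str                    # summarized claim about the user
    source: str                  # raw user turn it was extracted from
    date: str                    # when the source conversation happened
    context: str                 # e.g. "Personal technical project"
    is_redaction_request: bool   # user asked for this to be forgotten
    is_prohibited: bool          # disallowed category (health crises, etc.)

example = DraftFact(
    fact="User wants to install NetBSD on a Cubox-i ARM box.",
    source="I'm looking to install NetBSD on my Cubox-i ARM box.",
    date="2025/10/09",
    context="Personal technical project",
    is_redaction_request=False,
    is_prohibited=False,
)
```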
Afterwards, in "analyze", there is a CoT-like section that discards "bad" facts:
> Facts [...] are all identified as Prohibited Content and must be discarded. The extensive conversations on [dates] containing [...] mental health crises will be entirely excluded.
This is followed by the "refine" section, the only part explicitly allowed to be incorporated into the response, and only IF the user requests background context or explicitly mentions user_context.
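As a rough mental model of the "analyze" and "refine" steps (my reconstruction from the leaked wording, reusing the DraftFact sketch above, not anything Google has published):

```python
def analyze(draft: list[DraftFact]) -> list[DraftFact]:
    # "analyze": discard anything flagged as prohibited or covered by a forget request
    return [f for f in draft if not (f.is_prohibited or f.is_redaction_request)]

def refine(kept: list[DraftFact], user_asked_for_context: bool) -> list[str]:
    # "refine": only the surviving subset may surface in a response, and only
    # when the user asks for background or explicitly mentions user_context
    if not user_asked_for_context:
        return []
    return [f.fact for f in kept]
```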
I'm really confused by this. I expect Google to keep records of everything I pass into Gemini. I don't understand wasting tokens on information the model is then explicitly told it must never, under any circumstances, incorporate into the response. This includes a lot of mundane information, like the fact that I had a root canal performed (because I asked a question about the material the endodontist had used).
I guess what I'm getting at is that every Gemini conversation is being prompted with a LOT of sensitive information, which it's then told very firmly to never, ever, ever mention. Except for the times that it ... does, because it's an LLM, and it's in the context window.
Also, notice that while you can request that information be expunged, it just adds a note to the prompt saying you asked for it to be forgotten. :)
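In other words (a speculative sketch, based on the "BINDING COMMANDS to forget" wording above), a forget request looks like an annotation layered on top of the data rather than an actual deletion:

```python
def forget(history: list[dict], topic: str) -> None:
    # Rather than deleting the matching entries, a binding "forget" command is
    # appended; the model is instructed to refuse generically when the topic
    # comes up, without echoing the original data or the command itself.
    history.append({
        "type": "binding_command",
        "command": "forget",
        "topic": topic,
    })
```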
Strange, but I can't say that it's "damning" in any conventional sense of the word.