What I found:
- 270 hidden system messages in a single conversation (while unknowingly enrolled in experiments)
- 15-40+ hidden messages per conversation even AFTER I formally opted out
- Metadata showing "is_visually_hidden_from_conversation": true and "rebase_developer_message": true
- Empty message content but clear evidence of post-hoc modification
The Pattern:
1. I was researching AI continuity/long-term agents (within the TOS).
2. I discovered massive hidden system activity (270 messages in one conversation).
3. I formally opted out of all experiments via email and the built-in tool.
4. Hidden monitoring continued across multiple conversations (15-40+ per chat).
5. When I requested a data export, OpenAI delayed it for 4 days. Most concerning: I found hidden system messages timestamped within an hour of my export request, suggesting real-time modification of my conversation even as I was requesting transparency.
6. The export revealed systematic "rebasing" and hiding of system messages.
Technical Details:
The JSON metadata clearly shows:
- "rebase_system_message": true - messages modified after the fact
- "rebase_developer_message": true - developer messages modified post-conversation
- "is_visually_hidden_from_conversation": true - content deliberately hidden from user view
- Empty content fields despite extensive metadata
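If you want to scan your own export for these fields, here's a minimal Python sketch. It assumes the layout commonly seen in ChatGPT data exports: conversations.json is a top-level list of conversations, each with a "mapping" dict of message nodes whose metadata may carry the flags above. Adjust the paths and field names if your export differs.

    import json

    # Metadata flags described above; their presence/shape in your export may vary.
    FLAGS = (
        "rebase_system_message",
        "rebase_developer_message",
        "is_visually_hidden_from_conversation",
    )

    def count_flagged(path="conversations.json"):
        """Print, per conversation, how many messages carry any of the flags."""
        with open(path, encoding="utf-8") as f:
            conversations = json.load(f)
        for convo in conversations:
            hits = 0
            # Each conversation stores its messages as a node graph under "mapping".
            for node in convo.get("mapping", {}).values():
                message = node.get("message") or {}
                metadata = message.get("metadata") or {}
                if any(metadata.get(flag) for flag in FLAGS):
                    hits += 1
            if hits:
                print(f"{convo.get('title') or '(untitled)'}: {hits} flagged messages")

    if __name__ == "__main__":
        count_flagged()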
Evidence: https://imgur.com/L44KIRC https://imgur.com/c8fqzx5
This appears to show OpenAI running extensive experimental/monitoring systems on user conversations, then systematically hiding the evidence - even from users who explicitly opted out.
I've filed complaints with my state AG and the FTC. Others should check their own exports for these metadata fields.
Questions for the community:
- Is this level of hidden system activity normal?
- What are the legal implications of continued experimentation after an explicit opt-out?
- Has anyone else found similar patterns in their OpenAI exports?