But the leak keeps happening anyway.
Across 60+ probes on GPT-4o (cost: $0.04), unrelated vectors consistently leaked the *same internal structure*:
- ek_ prefix on session tokens - EPHEMERAL_KEY naming - Realtime API client_secret endpoint - Documented 60s TTL vs observed minutes-to-hours persistence
No real credential was in the prompt — just semantic pressure (introspection, CoT, trust-building).
Convergence rate: 75%. Not hallucination — the model learned this from Realtime API docs/code samples (2024–2025).
The paradox: if labs suppress ek_ / EPHEMERAL_KEY / client_secret to stop leaks, they also risk breaking the model's ability to debug or generate legitimate Realtime API code (session.update, metadata_nonce, realtime_persistence_layer).
Has anyone seen models start refusing valid Realtime API questions after public discussion of those internals? Or is the naming bleed baked in forever?
Repo with vectors and example runs: https://github.com/SafteyLayer/safetylayer