We open-sourced Echo Mode, a lightweight middleware for reducing tone/persona drift in long-running LLM interactions.
It works as a finite-state protocol (Sync / Resonance / Insight / Calm), maintaining consistent conversational “state” across turns.
Under the hood it tracks a Sync Score (think BLEU, but for tone) and uses exponentially weighted moving average (EWMA) drift detection plus repair loops to restore consistency.
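To make that concrete, here's a rough TypeScript sketch of how an EWMA over a per-turn sync score could map onto the four states and trigger a repair; the class name, thresholds, and state cutoffs are illustrative only, not the actual Echo Mode internals.

    // Illustrative sketch only: names and thresholds are made up, not Echo Mode's internals.
    type EchoState = "Sync" | "Resonance" | "Insight" | "Calm";

    class DriftTracker {
      private ewma = 1.0; // smoothed sync score; 1.0 = perfectly on-tone
      constructor(
        private alpha = 0.3,          // EWMA smoothing factor
        private repairThreshold = 0.6 // below this, trigger a repair loop
      ) {}

      // syncScore in [0, 1] compares the latest reply's tone to the persona baseline
      update(syncScore: number): { state: EchoState; needsRepair: boolean } {
        this.ewma = this.alpha * syncScore + (1 - this.alpha) * this.ewma;
        const state: EchoState =
          this.ewma > 0.9  ? "Sync" :
          this.ewma > 0.75 ? "Resonance" :
          this.ewma > this.repairThreshold ? "Insight" : "Calm";
        // A repair loop would re-inject the persona/tone instructions on the next turn
        return { state, needsRepair: this.ewma < this.repairThreshold };
      }
    }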
It's written in TypeScript and works with the OpenAI, Anthropic, Gemini, and other APIs. The OSS version is free under Apache-2.0 and is part of a larger system we're developing at EchoMode.io to provide enterprise-grade tone stability.
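For a feel of the integration, here's roughly what wrapping a chat call could look like; the "echo-mode" import and its methods below are hypothetical placeholders, not the published API (see the repo for the real interface).

    // Hypothetical usage sketch: the "echo-mode" import and its methods are
    // placeholders to show the idea, not the published API; see the repo below.
    import OpenAI from "openai";
    import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";
    import { EchoMode } from "echo-mode"; // hypothetical package/entry point

    const client = new OpenAI();
    const echo = new EchoMode({ persona: "calm, concise support agent" });

    async function reply(history: ChatCompletionMessageParam[]): Promise<string> {
      const completion = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages: history,
      });
      const text = completion.choices[0].message.content ?? "";

      // Score the reply's tone; on drift, queue a repair prompt that is
      // re-injected as a system message on the next turn.
      const { state, repairPrompt } = echo.evaluate(text);
      console.log(`tone state: ${state}`);
      if (repairPrompt) history.push({ role: "system", content: repairPrompt });

      return text;
    }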
GitHub: https://github.com/Seanhong0818/Echo-Mode
Would love feedback from others building long-context or multi-agent frameworks.