I’m building MyEverly, an AI thought companion designed for people who want to think more clearly rather than simulate a relationship.
The core idea is simple: many AI companions optimize for emotional dependency, persona bonding, or long-term memory accumulation. I wanted to explore the opposite direction.
MyEverly is built around three constraints:
1. *Privacy first by default*
   - No accounts required
2. *Thought partner, not replacement*
   - The system is tuned for reflective dialogue, clarification, and perspective-testing
   - The goal is to reduce emotional load, not to become the emotional center
3. *Ephemeral by design*
   - Sessions are lightweight and disposable
   - AI should help organize thinking, not colonize it
Technically, this has meant making tradeoffs that go against typical engagement metrics.
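To make "ephemeral by design" a bit more concrete, here is a minimal sketch of what a session layer shaped this way could look like: conversations live only in process memory with a short TTL, are never written to disk, and aren't tied to an account. The names (`EphemeralSession`, `SessionStore`, `ttl_seconds`) and the 30-minute default are illustrative assumptions, not MyEverly's actual implementation.

```python
import time
import uuid
from dataclasses import dataclass, field

# Illustrative sketch only: names and TTL are assumptions, not MyEverly's real code.

@dataclass
class EphemeralSession:
    """A conversation that lives only in memory and expires on its own."""
    session_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: float = field(default_factory=time.monotonic)
    messages: list[str] = field(default_factory=list)

class SessionStore:
    """No accounts, no database: expired sessions are simply dropped."""

    def __init__(self, ttl_seconds: float = 30 * 60):
        self.ttl_seconds = ttl_seconds
        self._sessions: dict[str, EphemeralSession] = {}

    def create(self) -> EphemeralSession:
        session = EphemeralSession()
        self._sessions[session.session_id] = session
        return session

    def get(self, session_id: str) -> EphemeralSession | None:
        self._evict_expired()
        return self._sessions.get(session_id)

    def _evict_expired(self) -> None:
        # Drop anything older than the TTL; nothing survives a restart either.
        now = time.monotonic()
        expired = [
            sid for sid, s in self._sessions.items()
            if now - s.created_at > self.ttl_seconds
        ]
        for sid in expired:
            del self._sessions[sid]
```

Nothing in a design like this persists across a restart, which is the point: the user decides what, if anything, gets carried forward.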
I’m especially interested in feedback from people who have strong opinions about:
- How AI companions should behave
- Whether “memory” in AI systems is overrated or ethically risky
- How much agency users should have over conversational persistence
I don’t think this is the future of relationships. I do think it’s a healthier shape for AI assistance.
Happy to answer questions or hear criticism.