Most social platforms treat identity as the foundation of trust. Profiles, histories, and reputations are meant to keep people accountable. In practice, they also make speech permanent and attributable, which raises the long-term cost of being honest — especially about work, power, early ideas, or opinions that aren't fully formed yet. When words can be tied to a name forever, people hesitate. They self-censor. They choose safer versions of what they actually think.
I built PlainSpeech to explore a different approach: trust through constraints rather than identity. It's an anonymous, ephemeral discussion layer where posts expire by default, identities are scoped to a single thread, and there are no profiles, follower graphs, or reputation scores.
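To make "scoped to a single thread" concrete, here is a minimal sketch of how such an identity layer could work, assuming HMAC-based derivation from a server-side key. The names (SERVER_SECRET, thread_identity, the 24-hour TTL) are illustrative assumptions, not PlainSpeech's actual implementation.

    import hashlib
    import hmac
    import time

    # Hypothetical server-side key; PlainSpeech's real scheme is not published.
    SERVER_SECRET = b"rotate-me-regularly"

    def thread_identity(session_token: str, thread_id: str) -> str:
        """Derive a pseudonym that is stable within one thread but unlinkable
        across threads: the HMAC mixes the thread ID into the handle."""
        digest = hmac.new(
            SERVER_SECRET,
            f"{session_token}:{thread_id}".encode(),
            hashlib.sha256,
        ).hexdigest()
        return f"anon-{digest[:8]}"

    def is_expired(posted_at: float, ttl_seconds: int = 86_400) -> bool:
        """Posts expire by default; the 24-hour TTL is an assumed placeholder."""
        return time.time() - posted_at > ttl_seconds

The point of deriving the handle per thread is that one person keeps a single voice inside a conversation, so replies stay coherent, while nothing links their handles across conversations.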
A key part of this is intentional trust. Alongside public discussions, the app lets anyone create a gated topic — a private space unlocked with a shared secret (TOTP-style). Only people with that secret can read or participate. Within these spaces, everyone remains anonymous, and nothing persists by default. Rather than being tied to who you are, trust comes from whom you choose to share access with.
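The post doesn't specify the gate beyond "TOTP-style", but a standard RFC 6238 check against the topic's shared secret would look roughly like this. The can_enter helper, the base32 seed format, and the one-step clock-skew window are assumptions for illustration, not PlainSpeech's API.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, step: int = 30, digits: int = 6,
             at: float | None = None) -> str:
        """Standard RFC 6238 code: HMAC-SHA1 over the current 30-second time step."""
        key = base64.b32decode(secret_b32.upper())
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def can_enter(topic_secret_b32: str, submitted_code: str) -> bool:
        """Admit anyone holding the topic's secret; allow one step of clock skew."""
        now = time.time()
        return any(
            hmac.compare_digest(totp(topic_secret_b32, at=now + d * 30), submitted_code)
            for d in (-1, 0, 1)
        )

A gate like this fits the ephemerality goal: possession of the secret is the only credential, so revoking access is just rotating the secret, and no membership list ever needs to exist.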
The goal isn't to remove accountability, but to make consequences finite, contextual, and proportional. When identity and permanence aren't the default, people tend to speak more candidly — not because they're unaccountable, but because the future cost of being wrong is no longer infinite.
The product is early, and some of its limits are deliberate. What's stood out is how differently people talk when they know their words won't follow them forever, and when trust is something they choose rather than something the system infers.
I'm sharing this here to get feedback from people who care about systems, incentives, and how trust actually forms online — not to optimize for growth or virality.
Link: https://plainspeech.app