I’m working on OnionLab, an experimental ecosystem of services built around a strict constraint:
All user-facing services are accessible exclusively via Tor (.onion). Clearnet is used only to publish static, official information.
This is not because Tor is a “feature”, but because treating privacy as a hard constraint simplifies some design decisions while making others explicit.
By committing to Tor-only access, we intentionally avoid:

- IP-based assumptions
- user tracking or profiling
- identity binding through accounts or social graphs
Instead, we focus on:

- minimal and explicit state
- short-lived session state only where strictly required
- cryptographic verification (PGP) rather than identity claims
- append-only records instead of mutable histories
One example is OnionLab Trust, which records references (e.g. URLs, onion services, external account identifiers) declared by PGP key holders.
Trust does not verify ownership, legitimacy, or truth. Its only guarantee is that a given reference was registered or updated by the holder of a specific PGP private key.
The goal is not to create authority, but to allow others to observe continuity and intent over time without weakening anonymity.
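To make the "append-only records" idea concrete, here is a minimal sketch of how such a log could be structured. This is not OnionLab's actual implementation; all names (`AppendOnlyLog`, `record_digest`) are hypothetical, and PGP signature verification is treated as an opaque field rather than performed here. Each entry chains a digest of the previous entry with its own payload, so rewriting history invalidates every later digest:

```python
import hashlib
import json

def record_digest(prev_hash: str, payload: dict) -> str:
    """Digest of the previous entry's hash plus a canonical JSON payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + body).hexdigest()

class AppendOnlyLog:
    """Hypothetical append-only log: entries can be added, never rewritten,
    and any mutation of past entries breaks the hash chain."""
    GENESIS = "0" * 64  # sentinel digest before the first entry

    def __init__(self):
        self.entries = []  # list of (payload, digest) tuples

    def append(self, payload: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        digest = record_digest(prev, payload)
        self.entries.append((payload, digest))
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev = self.GENESIS
        for payload, digest in self.entries:
            if record_digest(prev, payload) != digest:
                return False
            prev = digest
        return True

log = AppendOnlyLog()
# "sig" would hold a detached PGP signature over the reference,
# verified out of band (e.g. gpg --verify); it is opaque to the log.
log.append({"key_fpr": "ABCD1234", "ref": "http://example.onion", "sig": "..."})
log.append({"key_fpr": "ABCD1234", "ref": "http://updated.onion", "sig": "..."})
assert log.verify()
```

The point of the sketch is the separation of concerns: the log proves ordering and continuity, while PGP proves that each update came from the same key holder.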
I’m sharing this here not as a product launch, but as a concrete exploration of what service design looks like when privacy is treated as a non-negotiable requirement.
I’d be interested in hearing from people who have built Tor-only systems, or who considered this approach and decided against it.
Thanks for reading.