The goal is simple: the server should not be able to read your chats or access your LLM API keys, even if it’s fully compromised.
Most AI chat tools proxy everything through their backend in plaintext. We wanted something closer to a zero-knowledge design, like a password manager or Signal.
How it works:
- All messages, attachments, history, and API keys are encrypted on-device
- The server only stores encrypted blobs
- Prompts go directly from your browser/device to the model provider (BYOK), not through us
- Native iOS + Android apps
- Passkeys/WebAuthn support
For higher-risk deployments, you can run the backend inside TEEs/enclaves for extra isolation and remote attestation, so even the infrastructure operator can't inspect memory.
Stack: React, Hono + tRPC, PostgreSQL, Bun.
We built this because we use multiple providers (OpenAI, Anthropic, Ollama, etc.) and didn’t want prompts logged or keys sitting on someone else’s server.
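The BYOK flow means the request is built and sent entirely in the browser, so the key never transits our backend. A hedged sketch (the provider URL and request shape follow OpenAI's public chat completions API; the model name and helper are illustrative, not Onera's actual client code):

```typescript
// Sketch: build a provider request directly in the browser.
// The API key is read from on-device storage and only ever sent
// to the provider, never to our server.
function buildProviderRequest(apiKey: string, prompt: string): Request {
  return new Request("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // user's own key
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
}

// Usage: const res = await fetch(buildProviderRequest(userKey, "Hello"));
```

Swapping providers (Anthropic, Ollama, etc.) is a matter of changing the URL and request shape; the trust model stays the same.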
GitHub: https://git.new/onera
Hosted version: https://onera.chat (free during alpha)
iOS: https://apps.apple.com/in/app/onera-private-ai-chat/id675812...
Would love feedback on the design or threat model. Happy to answer questions.