Dina is a personal AI with a user-owned identity and encrypted persona vaults. Other agents can go through Dina's permission layer when they need access to sensitive data or want to take risky actions, and Dina escalates to the user for approval when an action is deemed risky. But Dina is broader than just an approval layer: she has her own cryptographic identity, supports encrypted Dina-to-Dina messaging, and can set up a Trust Network that informs trust-based decisions.
What works today: (1) User-owned cryptographic identity
(2) Encrypted persona vaults for health / finance / work / etc.
(3) Approval flows for sensitive access and risky actions
(4) PII scrub / rehydrate for outbound calls
(5) Encrypted Dina-to-Dina messaging
(6) Signed reviews, vouching for a user/bot, and flagging a user/bot in the Trust Network
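As a rough sketch of how an approval flow like (3) could work (the names `AccessRequest`, `handle_request`, and the risk list below are my own illustration, not Dina's actual API):

```python
from dataclasses import dataclass

# Hypothetical risk classification; Dina's real policy model may differ.
RISKY_ACTIONS = {"send_payment", "share_health_record"}

@dataclass
class AccessRequest:
    agent_id: str   # the calling agent's identity
    action: str     # what it wants to do
    persona: str    # which vault the action touches

def handle_request(req: AccessRequest, ask_user) -> bool:
    """Auto-allow low-risk actions; escalate risky ones to the user."""
    if req.action not in RISKY_ACTIONS:
        return True
    return ask_user(f"{req.agent_id} wants to {req.action} on {req.persona}. Allow?")

# Example: a stubbed user who denies everything.
req = AccessRequest("openclaw-agent", "send_payment", "finance")
print(handle_request(req, ask_user=lambda prompt: False))  # → False
```

The key property is that the risky path cannot complete without a decision from the user-facing channel.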
You can install it on your local machine and use the main flows through Telegram or Bluesky: store memories, get automated reminders, get answers enriched by previous memories, watch approval flows, and test Dina-to-Dina messaging and the Trust Network.
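For the Trust Network, one simple way a trust-based decision could be made from signed vouches and flags is a weighted tally. This is entirely my own sketch with made-up weights; the real scoring is likely more sophisticated:

```python
# Toy trust score: vouches add weight, flags subtract more heavily.
# The weights and threshold are illustrative assumptions, not Dina's values.
def trust_score(vouches: int, flags: int) -> float:
    return vouches * 1.0 - flags * 2.0

def is_trusted(vouches: int, flags: int, threshold: float = 3.0) -> bool:
    return trust_score(vouches, flags) >= threshold

print(is_trusted(vouches=5, flags=1))  # 5 - 2 = 3.0 → True
print(is_trusted(vouches=2, flags=1))  # 2 - 2 = 0.0 → False
```

Because reviews, vouches, and flags are signed, a scorer like this can also verify who issued each one before counting it.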
Open source, MIT licensed, technical preview status. 4,500+ tests passing. Built on Go/Python/TS.
The architecture decision I'm most opinionated about is that persona isolation should be cryptographic, not just application-level. If the system is hit by a prompt injection attack, application-level checks could be bypassed, but here an attacker still couldn't read a locked persona without user approval.
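A minimal sketch of what "cryptographic, not application-level" means in practice, assuming the vault key is derived from a user-held secret so it only exists once the user approves. The XOR keystream below is a stdlib-only toy for illustration; a real vault would use an authenticated cipher such as AES-GCM:

```python
import hashlib

def derive_key(user_secret: str, salt: bytes) -> bytes:
    # The key is derived from the user's secret, so it exists only
    # after the user supplies it (i.e. approves the unlock).
    # Low scrypt cost parameters to keep the demo fast.
    return hashlib.scrypt(user_secret.encode(), salt=salt,
                          n=2**12, r=8, p=1, dklen=32)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream built from SHA-256 blocks -- NOT secure,
    # shown only to make the isolation property concrete.
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

salt = b"persona-health"
key = derive_key("user passphrase", salt)
ciphertext = xor_stream(key, b"blood type: O-")

# Without the user's secret, a prompt-injected agent sees only ciphertext;
# with the key, the round-trip recovers the record.
print(ciphertext != b"blood type: O-")                    # True
print(xor_stream(key, ciphertext) == b"blood type: O-")   # True
```

The point is that the vault's confidentiality rests on the key, not on whether the application layer remembers to enforce a check.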
Known limits: rough edges, since it is a technical preview, and agent control depends on agents like OpenClaw choosing to call Dina (it is not a forced call).
I am happy to go deeper into any architectural decisions.