- Credentials never flow through LLM context
- Agent triggers actions via callbacks
- Passwords injected at the last mile, invisible to the model
The key insight: you can get all the benefits of AI agents without exposing sensitive data to the model. Client-side execution + careful context isolation makes this possible.
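A minimal sketch of what that last-mile pattern can look like in TypeScript (the vault, the handle scheme, and the `executeAction` callback are illustrative assumptions, not the commenter's actual implementation):

```typescript
// The model only ever sees an opaque handle like "cred://github-token";
// the real secret is resolved client-side at the moment of the call.

type CredentialHandle = string; // e.g. "cred://github-token"

// Local vault; its contents are never serialized into model context.
const vault = new Map<CredentialHandle, string>([
  ["cred://github-token", "<secret loaded locally, outside the LLM>"],
]);

// The action the agent emits: everything here is safe to show the model.
interface AgentAction {
  url: string;
  method: "GET" | "POST";
  authHandle: CredentialHandle; // placeholder, not the secret itself
}

// Callback triggered by the agent; the credential is injected at the last mile.
async function executeAction(action: AgentAction): Promise<Response> {
  const secret = vault.get(action.authHandle);
  if (secret === undefined) {
    throw new Error(`unknown credential handle: ${action.authHandle}`);
  }
  return fetch(action.url, {
    method: action.method,
    headers: { Authorization: `Bearer ${secret}` }, // injected post-LLM
  });
}
```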
For anyone building AI agents that handle PII/credentials, this WASM approach is worth studying.
I realized that for 90% of 'summarize this' or 'debug this' tasks, the LLM doesn't really need the specific PII or sensitive values; it just needs to know that an entity exists there to understand the structure.
That's why I focused on the reversible mapping, so that we can re-inject the real data locally after the LLM does the heavy lifting. Cool to hear you're using a similar pattern for credentials.
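For anyone curious, a minimal TypeScript sketch of a reversible mapping like this (the placeholder format and the email-only regex are simplified assumptions; a real pass would cover more entity types):

```typescript
// Swap entities for stable placeholders before the text reaches the model,
// then restore them locally afterwards.

const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function redact(text: string): { masked: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let i = 0;
  const masked = text.replace(EMAIL_RE, (match) => {
    const placeholder = `<EMAIL_${i++}>`;
    map.set(placeholder, match); // remember the real value locally
    return placeholder;          // the model only sees this token
  });
  return { masked, map };
}

function restore(text: string, map: Map<string, string>): string {
  let out = text;
  for (const [placeholder, original] of map) {
    out = out.split(placeholder).join(original); // re-inject the real data
  }
  return out;
}

// Usage: send `masked` to the LLM, then run restore() on its reply.
const { masked, map } = redact("Contact alice@example.com about the bug.");
console.log(restore(masked, map)); // round-trips back to the original
```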
firesaber•3h ago
It runs 100% in the browser (Next.js + WebAssembly) and uses regex and deterministic logic (no AI) to strip names, emails, and SSNs before they leave your clipboard.
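A rough sketch of what a regex-only pass like that might look like (the patterns are illustrative guesses, not the project's actual rules; names generally need a dictionary or heuristic pass that plain regex can't cover):

```typescript
// One-way strip: replace matches with type tags before the text leaves
// the client.

const SSN_RE = /\b\d{3}-\d{2}-\d{4}\b/g;
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function stripPII(text: string): string {
  return text
    .replace(SSN_RE, "[SSN]")      // 123-45-6789     -> [SSN]
    .replace(EMAIL_RE, "[EMAIL]"); // bob@example.com -> [EMAIL]
}

console.log(stripPII("Reach bob@example.com, SSN 123-45-6789."));
// -> "Reach [EMAIL], SSN [SSN]."
```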
It's a simple MVP right now. Would love to know if this solves a real problem for you.