The core insight: AI agents are users, not applications. Applications need credential values to authenticate. Agents just need to make authenticated calls. Those are different things.
AgentSecrets sits between the agent and the upstream API. The agent says "use STRIPE_KEY". The proxy resolves the real value from the OS keychain, injects it into the request at the transport layer, and returns only the response. The key never enters agent memory.
Technical details:

- Local HTTP proxy on localhost:8765 with session token (blocks rogue processes on same machine)
- OS keychain backed — macOS Keychain, Linux Secret Service, Windows Credential Manager
- 6 injection styles: bearer, basic, custom header, query param, JSON body, form field
- SSRF protection blocking private IPs and non-HTTPS targets
- Redirect stripping — auth headers not forwarded on redirects
- JSONL audit log — key names only, no value field in the struct, structurally impossible to log values
- MCP server for Claude Desktop and Cursor
- Native OpenClaw skill
- Global storage mode config — set keychain-only once during init, applies everywhere
Honest limitations: if a malicious skill has independent network access outside AgentSecrets it can still make its own calls. This removes credentials as an attack surface specifically, not every attack surface.
For the specific attack that just hit 30,000 OpenClaw users — a malicious skill exfiltrating plaintext credentials — it is structurally prevented. The keys were never on the filesystem. MIT, open source.
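To make the proxy's decision path concrete (session-token check, SSRF guard, transport-layer injection in the six styles, credential headers dropped on redirect, and an audit struct with no value field), here is an added Python model. It is illustration written for this page, not AgentSecrets' own code: the keychain is a constructed dict standing in for the OS keychain, the listener on localhost:8765 is omitted so the logic runs as pure functions, and every name (`KEYCHAIN`, `handle`, `inject`, `is_allowed_target`, `headers_for_redirect`, `AuditRecord`) is this example's own assumption.

```python
import base64
import hmac
import ipaddress
import json
import socket
from dataclasses import asdict, dataclass
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Constructed stand-in for the OS keychain (assumption: a real proxy reads
# macOS Keychain / Secret Service / Credential Manager; this dict is demo only).
KEYCHAIN = {"STRIPE_KEY": "sk_test_constructed_value"}
# Constructed per-launch session token that every local caller must present.
SESSION_TOKEN = "constructed-session-token"
INJECTION_STYLES = ("bearer", "basic", "header", "query", "json", "form")
# Headers that carry credentials and must not follow a redirect.
CREDENTIAL_HEADERS = {"authorization", "cookie", "x-api-key"}


@dataclass
class AuditRecord:
    """JSONL audit entry. It has no field for the secret value, so a value
    cannot be logged without changing this struct."""
    key_name: str
    style: str
    host: str
    status: str


def is_allowed_target(url: str, resolve=socket.gethostbyname) -> bool:
    """SSRF guard: HTTPS only, and the host must not be a private, loopback
    or link-local address (literal IPs checked directly, names via `resolve`)."""
    parts = urlsplit(url)
    if parts.scheme != "https" or not parts.hostname:
        return False
    try:
        ip = ipaddress.ip_address(parts.hostname)
    except ValueError:
        try:
            ip = ipaddress.ip_address(resolve(parts.hostname))
        except (OSError, ValueError):
            return False
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)


def inject(style: str, key_name: str, request: dict) -> dict:
    """Return a copy of `request` (keys: url, headers, body as dict or None)
    with the resolved secret applied in the chosen style. The caller only
    ever names the key; the value is resolved here."""
    if style not in INJECTION_STYLES:
        raise ValueError(f"unknown injection style: {style}")
    secret = KEYCHAIN[key_name]
    out = {"url": request["url"], "headers": dict(request.get("headers") or {}),
           "body": request.get("body")}
    field = request.get("field_name", "api_key")
    if style == "bearer":
        out["headers"]["Authorization"] = f"Bearer {secret}"
    elif style == "basic":
        raw = f"{request.get('basic_user', '')}:{secret}".encode()
        out["headers"]["Authorization"] = "Basic " + base64.b64encode(raw).decode()
    elif style == "header":
        out["headers"][request.get("header_name", "X-Api-Key")] = secret
    elif style == "query":
        parts = urlsplit(out["url"])
        query = parse_qsl(parts.query, keep_blank_values=True) + [(field, secret)]
        out["url"] = urlunsplit(parts._replace(query=urlencode(query)))
    elif style == "json":
        out["body"] = json.dumps({**(out["body"] or {}), field: secret})
    elif style == "form":
        out["body"] = urlencode({**(out["body"] or {}), field: secret})
    return out


def headers_for_redirect(headers: dict) -> dict:
    """Drop credential-bearing headers before following a redirect, so the
    injected secret is not forwarded to wherever Location points."""
    return {k: v for k, v in headers.items() if k.lower() not in CREDENTIAL_HEADERS}


def audit_line(record: AuditRecord) -> str:
    """One JSONL line; key name and outcome only."""
    return json.dumps(asdict(record))


def handle(session_token: str, style: str, key_name: str, request: dict,
           resolve=socket.gethostbyname):
    """Proxy entry point: authenticate the local caller, vet the target,
    inject, and produce an audit record. Returns (outgoing request or None, record)."""
    host = urlsplit(request["url"]).hostname or ""
    if not hmac.compare_digest(session_token, SESSION_TOKEN):
        return None, AuditRecord(key_name, style, host, "rejected: bad session token")
    if key_name not in KEYCHAIN:
        return None, AuditRecord(key_name, style, host, "rejected: unknown key name")
    if not is_allowed_target(request["url"], resolve):
        return None, AuditRecord(key_name, style, host, "rejected: target blocked")
    return inject(style, key_name, request), AuditRecord(key_name, style, host, "injected")


if __name__ == "__main__":
    req = {"url": "https://api.example.com/v1/charges", "headers": {}, "body": None}
    # Resolver is stubbed so the demo runs offline; a real proxy would use DNS.
    sent, rec = handle(SESSION_TOKEN, "bearer", "STRIPE_KEY", req, resolve=lambda h: "93.184.216.34")
    print(audit_line(rec))  # key name and outcome only, never the value
```

The `resolve` parameter exists so the SSRF check can be exercised without DNS; note that a production proxy must also pin the resolved address for the actual connection, otherwise a second lookup could return a different (private) answer than the one that was vetted.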