I'm impressed that Superhuman seems to have handled this so well; lots of big names are fumbling with AI vuln disclosures. Grammarly is not necessarily who I would have bet on to get it right.
0xferruccio•1h ago
The primary exfiltration vector for LLMs is getting the client to make a network request on the attacker's behalf, typically by emitting a rendered image whose URL carries the sensitive data as query parameters; when the client fetches the image, the request hands the data to the attacker's server.
As Claude Code increasingly uses browser tools, we may need to move away from .env files to something encrypted, kind of like Rails credentials, but without the master key sitting in the .env.
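Roughly what I mean, as a Python sketch rather than anything Rails actually ships (the file name, keyring entry, and load_credentials helper are all made up): the secrets sit in an encrypted blob that's fine to commit, and the key that decrypts them lives in the OS keychain instead of in .env.

    import json
    import keyring                          # pip install keyring
    from cryptography.fernet import Fernet  # pip install cryptography

    CREDENTIALS_FILE = "credentials.json.enc"  # encrypted blob, safe to commit
    KEYRING_SERVICE = "myapp"                  # hypothetical keychain entry
    KEYRING_USER = "credentials-key"

    def load_credentials() -> dict:
        # Pull the decryption key from the OS keychain, never from .env
        key = keyring.get_password(KEYRING_SERVICE, KEYRING_USER)
        if key is None:
            raise RuntimeError("credentials key not found in the OS keychain")
        with open(CREDENTIALS_FILE, "rb") as f:
            ciphertext = f.read()
        return json.loads(Fernet(key.encode()).decrypt(ciphertext))

    # One-time setup (run by a human, not by the agent):
    #   key = Fernet.generate_key().decode()
    #   keyring.set_password("myapp", "credentials-key", key)
    #   blob = Fernet(key.encode()).encrypt(json.dumps({"STRIPE_KEY": "..."}).encode())
    #   open("credentials.json.enc", "wb").write(blob)

The agent can read the blob all it wants; without the keychain entry it's noise. Rails credentials work the same way with config/credentials.yml.enc and config/master.key, the difference being where you park the key.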
SahAssar•12m ago
So you are going to take the untrusted tool that kept leaking your secrets, keep the secrets away from it, but still use it to code the thing that uses the secrets? Are you actually reviewing the code it produces? In 99% of cases that's a "no" or a soft "sometimes".
sarelta•4h ago