This post went viral on Reddit, as users have a tendency to not put secrets (like API keys, etc.) in .env but instead paste them in the chat and let agents wire them up.
noobcoder•1h ago
Agents like Claude Code/OpenClaw save secrets in plaintext within config files, which makes a big attack vector for a local compromise becoming a cloud compromise.
We empirically verified that AI coding agents can be stopped from leaking secrets by intercepting tool calls and handling secrets entirely outside the model’s visibility, using Claude Code’s hook system.
Paired with an open source repo for cleanup, it shows that most leakage can be eliminated by treating secrets as a runtime dataflow problem rather than a static scanning issue.
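
The tool-call interception described above, as an added example that is not code from the post or its linked repo: a Claude Code `PreToolUse` hook that refuses any tool call whose input carries a literal credential, so the agent never writes one into a config file. It assumes the hook receives the event as JSON on stdin with `tool_name` and `tool_input`, and that exit code 2 blocks the call and returns stderr to the agent; the pattern set and function names are constructed for illustration.

```python
#!/usr/bin/env python3
"""PreToolUse-style hook: refuse tool calls whose input carries a literal secret."""
import json
import re
import sys

# Constructed patterns for common credential shapes; extend for your providers.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "assigned_secret": re.compile(
        r"(?i)\b[\w-]*(?:api[_-]?key|secret|token|passw(?:or)?d)[\w-]*"
        r"\s*[:=]\s*['\"]?[A-Za-z0-9_\-/+]{16,}"
    ),
}

def iter_strings(node):
    """Yield every string nested anywhere in the tool input (file content, commands, edits)."""
    if isinstance(node, str):
        yield node
    elif isinstance(node, (dict, list)):
        children = node.values() if isinstance(node, dict) else node
        for child in children:
            yield from iter_strings(child)

def find_secrets(tool_input):
    """Return the sorted names of matching patterns, never the matched values."""
    hits = set()
    for text in iter_strings(tool_input):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.add(name)
    return sorted(hits)

def decide(event):
    """Map a hook event to (exit_code, message): 2 blocks the call, 0 allows it."""
    hits = find_secrets(event.get("tool_input", {}))
    if hits:
        # The message goes back to the model, so it names the pattern only.
        return 2, (f"Blocked {event.get('tool_name', '?')}: literal secret detected "
                   f"({', '.join(hits)}). Reference it by environment variable name instead.")
    return 0, ""

if __name__ == "__main__":
    code, message = decide(json.load(sys.stdin))
    if message:
        print(message, file=sys.stderr)
    sys.exit(code)
```

Pattern matching only catches known credential shapes, so treat this as one layer next to the cleanup of secrets that agents have already written to disk.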