These tools don’t just suggest code they can read local files and run shell commands. That’s very powerful, but it also means a prompt injection (or poisoned context) can turn a “helpful assistant” into something that looks a lot like an attacker’s shell.
I noticed that Cursor has publicly patched prompt-injection issues, including ones that opened paths to arbitrary command execution. Some security research is increasingly focused on “zero-click” prompt injection against AI agents.
The architectural problem I keep running into is that most guardrails today are opt-in (“use my tools”) rather than enforced (“you can’t do this operation”). If the agent decides to use a native tool directly, policy checks often don’t exist or don’t fire (There are bugs across Claude, Github Copilot and others that make enforcement a pain as well in todays atmosphere)
So I’m experimenting with a small proof-of-concept around policy-as-code for agent action that can for example,
- block reads of sensitive files (.env, ~/.ssh/*, tokens)
- require approval before risky shell commands run
- keep an audit log of what the agent attempted
- where supported, enforce decisions before execution rather than relying on the model’s cooperation
I’d really value input from people using these tools in real teams:
Would you install something that blocks or asks approval before an agent reads secrets or runs risky commands?
Would your company pay for centrally managed policies and audit logs?
What’s the least annoying UX that still counts as “real security”?
If you’ve seen real incidents or if you think this whole thing is dumb, inevitable, or already solved by containers, I’d would love your genuine take