These tools don’t just suggest code; they can read local files and run shell commands. That’s very powerful, but it also means a prompt injection (or poisoned context) can turn a “helpful assistant” into something that looks a lot like an attacker’s shell.
I noticed that Cursor has publicly patched prompt-injection issues, including ones that opened paths to arbitrary command execution. Some security research is increasingly focused on “zero-click” prompt injection against AI agents.
The architectural problem I keep running into is that most guardrails today are opt-in (“use my tools”) rather than enforced (“you can’t do this operation”). If the agent decides to use a native tool directly, policy checks often don’t exist or don’t fire. (There are also bugs across Claude, GitHub Copilot, and others that make enforcement painful right now.)
So I’m experimenting with a small proof-of-concept around policy-as-code for agent actions that can, for example (rough sketch after the list):
- block reads of sensitive files (.env, ~/.ssh/*, tokens)
- require approval before risky shell commands run
- keep an audit log of what the agent attempted
- where supported, enforce decisions before execution rather than relying on the model’s cooperation
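A minimal sketch of the shape I have in mind, assuming every proposed tool call passes through one choke point before it executes. The tool names, fields, and patterns below are placeholders, not any particular agent’s hook API:

```python
import fnmatch
import json
import time
from pathlib import Path

# Hypothetical policy: paths that must never be read, commands that need approval.
BLOCKED_PATHS = [".env", "*/.env", "~/.ssh/*", "*token*", "*.pem"]
RISKY_MARKERS = ["rm -rf", "curl ", "wget ", "chmod 777"]
AUDIT_LOG = Path("agent_audit.jsonl")

def audit(action: dict, decision: str) -> None:
    """Append every attempted action and its decision to an audit log."""
    entry = {"ts": time.time(), "action": action, "decision": decision}
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def check(action: dict) -> str:
    """Return 'allow', 'deny', or 'ask' for a proposed tool call."""
    if action["tool"] == "fs.read":
        path = str(Path(action["path"]).expanduser())
        if any(fnmatch.fnmatch(path, str(Path(p).expanduser())) for p in BLOCKED_PATHS):
            audit(action, "deny")
            return "deny"
    if action["tool"] == "shell.exec":
        if any(marker in action["command"] for marker in RISKY_MARKERS):
            audit(action, "ask")  # surface to the user before anything runs
            return "ask"
    audit(action, "allow")
    return "allow"

# An agent trying to read a secret is denied before execution, not after.
print(check({"tool": "fs.read", "path": "~/.env"}))               # deny
print(check({"tool": "shell.exec", "command": "rm -rf /tmp/x"}))  # ask
```

The hard part isn’t the policy logic; it’s the last bullet: making sure this check actually sits in front of execution instead of being something the agent can route around.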
I’d really value input from people using these tools in real teams:
Would you install something that blocks or asks approval before an agent reads secrets or runs risky commands?
Would your company pay for centrally managed policies and audit logs?
What’s the least annoying UX that still counts as “real security”?
If you’ve seen real incidents, or if you think this whole thing is dumb, inevitable, or already solved by containers, I’d love your genuine take.
niyikiza•1mo ago
I ended up building a 'capability token' primitive (think Macaroons or Google Zanzibar, but for ephemeral agent tasks) to solve this.
My approach (Tenuo) works like this:
1. Runtime Enforcement: The agent gets a cryptographically signed 'Warrant' that mechanically limits what the runtime allows. It’s not a 'rule' the LLM follows; it’s a constraint the runtime enforces (e.g., fs:read is only valid for /tmp/*).
2. Attenuation: As the agent creates sub-tasks, it can only delegate less authority than it holds.
3. Offline Verify: I wrote the core in Rust so I can verify these tokens in ~27µs on every single tool call without a network round-trip; a rough sketch of the general idea follows below.
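To make the chaining idea concrete without pulling the crate, here’s a macaroon-style sketch in Python. This is not the actual API or wire format; it only illustrates the underlying primitive: each caveat extends an HMAC chain, so a holder can narrow authority but never widen it, and verification is just a few HMAC recomputations with no network round-trip:

```python
import hashlib
import hmac

def mint(root_key: bytes, caveats: list[str]) -> dict:
    """Mint a token whose signature chains an HMAC over each caveat."""
    sig = hmac.new(root_key, b"warrant-v0", hashlib.sha256).digest()
    for c in caveats:
        sig = hmac.new(sig, c.encode(), hashlib.sha256).digest()
    return {"caveats": caveats, "sig": sig.hex()}

def attenuate(token: dict, caveat: str) -> dict:
    """Delegate *less* authority: anyone can add a caveat, no one can remove one."""
    sig = hmac.new(bytes.fromhex(token["sig"]), caveat.encode(), hashlib.sha256).digest()
    return {"caveats": token["caveats"] + [caveat], "sig": sig.hex()}

def verify(root_key: bytes, token: dict) -> bool:
    """Offline check: recompute the chain from the root key held by the runtime."""
    expected = mint(root_key, token["caveats"])["sig"]
    return hmac.compare_digest(expected, token["sig"])

# The parent task holds fs:read on /tmp/*; the sub-task it spawns gets a narrower scope.
root = b"runtime-secret"
parent = mint(root, ["fs:read /tmp/*"])
child = attenuate(parent, "fs:read /tmp/build/*")
assert verify(root, child)
```

A real verifier would also evaluate each caveat (path globs, expiry, operation) against the actual tool call, not just check the signature.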
If you are building a POC, feel free to rip out the core logic or use the crate directly. I’d love to see more tools move away from 'prompt engineering security' toward actual runtime guarantees.
Repo: https://github.com/tenuo-ai/tenuo
peanutlife•1mo ago
I had a PreToolUse hook enabled that was supposed to block reads of ~/.env. Claude tried to read it, hit an error, then asked me for permission. When I said yes, it retried the read and succeeded. The hook was effectively bypassed via user consent.
That was the “oh wow” moment for me. Hooks can influence behavior, but they don’t remove authority. As long as the agent process still has filesystem access, enforcement is ultimately negotiable. I even tried adding an MCP server, but again it’s up to Claude whether to actually use it.
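A contrived sketch of that distinction (nothing here is Claude Code’s real hook mechanism; an in-process function stands in for the hook): the check says no, but the process never lost the underlying authority, so anything that routes around the check still works:

```python
import os
from pathlib import Path

SECRET = Path.home() / ".env"  # hypothetical file the policy is meant to protect

def pre_read_hook(path: Path) -> bool:
    """Advisory policy check: returns False for 'blocked', but removes no authority."""
    return path != SECRET

# Attempt 1: the hook fires and the read is refused at the policy layer.
print("hook verdict:", "allow" if pre_read_hook(SECRET) else "block")

# "User said yes" and the call is retried past the hook's happy path.
# The process still holds OS-level filesystem access, so the read works anyway.
print("read still possible at OS level:", os.access(SECRET, os.R_OK))
```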
Your capability token approach is the missing piece here. It makes the distinction very clear: instead of asking the agent to behave, you never give it the power in the first place. No token, no read; no amount of prompting or approval changes that unless a new token is explicitly minted.
The way I’m thinking about it now is:
- hooks are useful for intent capture and policy decisions
- capability tokens are the actual enforcement primitive
- approvals should mint new, narrower tokens rather than act as conversational overrides
Really appreciate you sharing Tenuo. This feels like the right direction if we want agent security to move past “prompt engineering as policy” and toward real runtime guarantees.
niyikiza•1mo ago
Your framing of "approvals should mint new tokens" is the core design pattern in Tenuo.
The agent starts with zero file access. When it asks "Can I read ~/.env?" and the user says "Yes", the system doesn't just disable the hook. It mints a fresh, ephemeral Warrant for path: "~/.env".
That way, even if the agent hallucinates later and tries to reuse that permission for ~/.ssh (or even ~/.env after the TTL expires), it physically can’t. The token doesn’t exist.
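Roughly the shape of that lifecycle in a toy Python sketch (not the actual Tenuo API or token format): an approval mints a token scoped to one path with a short TTL, and the runtime rejects anything the token doesn’t cover:

```python
import hashlib
import hmac
import time

ROOT_KEY = b"runtime-secret"  # held by the enforcing runtime, never by the model

def mint_warrant(path: str, ttl_s: int = 60) -> dict:
    """A user approval mints a fresh, ephemeral grant for exactly one path."""
    exp = int(time.time()) + ttl_s
    payload = f"fs:read|{path}|{exp}".encode()
    sig = hmac.new(ROOT_KEY, payload, hashlib.sha256).hexdigest()
    return {"op": "fs:read", "path": path, "exp": exp, "sig": sig}

def allows(warrant: dict, op: str, path: str) -> bool:
    """Checked on every tool call: valid signature, matching scope, not expired."""
    payload = f"{warrant['op']}|{warrant['path']}|{warrant['exp']}".encode()
    expected = hmac.new(ROOT_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, warrant["sig"])
        and warrant["op"] == op
        and warrant["path"] == path
        and time.time() < warrant["exp"]
    )

# "Yes, read ~/.env" mints a narrow, short-lived warrant.
w = mint_warrant("~/.env", ttl_s=60)
print(allows(w, "fs:read", "~/.env"))         # True, until the TTL lapses
print(allows(w, "fs:read", "~/.ssh/id_rsa"))  # False: never granted, so it can't be reused
```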
Glad Tenuo resonates. This is the direction the whole ecosystem needs to move in.