I wrote a short position paper arguing that current agentic AI safety failures amount to the confused deputy problem on repeat. We are handing agents ambient authority and trying to contain it with soft constraints like prompts and userland wrappers. My take: you need hard, reduce-only authority enforced at a real boundary (kernel control-plane class), not something bypassable from userland. Curious how others are modeling this. What constraints do you think are truly non-negotiable?
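One existing kernel mechanism with exactly this reduce-only shape is Linux's no_new_privs flag: once a process sets it, no userland call can clear it, and the kernel enforces that authority can only shrink. A minimal sketch of the pattern (my illustration, not from the paper; Linux-only, and it assumes a libc that exposes `prctl`):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
PR_SET_NO_NEW_PRIVS = 38  # prctl constants from <linux/prctl.h>
PR_GET_NO_NEW_PRIVS = 39

def demo_reduce_only() -> int:
    """Set no_new_privs in a forked child and report the bit's value."""
    pid = os.fork()
    if pid == 0:
        # One-way switch: after this, execve() can never re-grant
        # privileges (setuid bits, file capabilities) to this process
        # or any descendant.
        libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
        # There is no prctl that clears the bit; the kernel only lets
        # authority shrink, never grow back.
        os._exit(libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Landlock rulesets and seccomp filters follow the same monotonic pattern: each is applied via a kernel control path and can only further restrict the process, which is the property a userland wrapper around an agent can't give you.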
Comments
mzajc•48m ago
Was this written with an LLM? If so, please add a note about it at the start of the README.
solidasparagus•47m ago
People want convenience more than they want security. No one wants permission grants that expire in minutes or hours. Every time the agent is stopped by a permission-grant check, the average user experience gets a little worse.
zb3•34m ago
> I wrote a short position
> "Reality check"
Hi GPT :)
twentyfiveoh1•21m ago
I thought "surely they wouldn't ...."
The issues in the article are more blatant.
You were right and caught it extremely quickly.