The idea here is different: Steward runs in the background, watches signals from tools like GitHub, email, calendar, chat, and screen context, and tries to move low-risk work forward before the user explicitly asks.
The core mechanism is a policy gate between perception and execution. Low-risk and reversible actions can be handled automatically with an audit trail. Higher-risk or irreversible actions must be escalated for explicit approval. Instead of constant notifications, the system is designed to brief the user periodically on what was done, what is pending, and what actually needs judgment.
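As a rough sketch of what a policy gate like this might look like (all names here — `PolicyGate`, `Risk`, `Action` — are illustrative, not from the actual prototype):

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = "low"    # reversible, safe to auto-execute
    HIGH = "high"  # irreversible or sensitive, needs approval

@dataclass
class Action:
    name: str
    risk: Risk
    reversible: bool

@dataclass
class PolicyGate:
    """Sits between perception and execution (hypothetical sketch)."""
    audit_log: list[str] = field(default_factory=list)
    pending: list[str] = field(default_factory=list)

    def submit(self, action: Action) -> str:
        # Only low-risk AND reversible actions run automatically,
        # with an audit-trail entry; everything else is escalated.
        if action.risk is Risk.LOW and action.reversible:
            self.audit_log.append(action.name)
            return "executed"
        self.pending.append(action.name)
        return "escalated"

    def briefing(self) -> dict[str, list[str]]:
        # Periodic digest instead of per-action notifications.
        return {"done": self.audit_log, "awaiting_approval": self.pending}
```

For example, "archive a duplicate email" would pass the gate, while "send a payment" would land in the pending queue and show up in the next briefing.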
Right now it is an early local-first prototype. It runs with a simple `make start`, opens a local dashboard, and uses an OpenAI-compatible API key.
I’d love feedback on a few things:
* whether “policy-gated autonomy” is the right abstraction for this kind of agent
* where the boundary should be between silent automation and interruption
* how people would structure connectors and context aggregation for a system like this