I kept hitting the same problem: either I had a feature idea I wanted an agent to build right then, or my agents were blocked waiting on a decision from me.
It dawned on me: humans are just another dependency in an agent workflow, so I turned myself into a tool-call.
I built an iOS app (Extendo) where agents can reach me to request approvals, choices, or plan reviews. They just use a CLI tool and a skill. My phone buzzes. I answer in seconds. The agent gets back to work.
The key: the agent blocks until you respond, and receives your answer along with your verbal feedback.
What you can do from your phone:
- approvals and checklists
- option buttons and rankings
- markdown plan reviews (tap-holding individual paragraphs to add voice comments is so satisfying!)
- kanban boards
- voice responses
- capture ideas on Apple Watch/Action Button and dispatch them to the right agent later
It’s a voice-first native iOS interface with push notifications. Push notifications are critical: the interaction needs to take seconds, not minutes.
```
extendo artifact create my_server implementation-choice \
  --type multiple_choice \
  --title "Where should we implement the rate limiter?" \
  --option "backend:Backend API" \
  --option "core:Core Library" \
  --option "edge:Edge/CDN" \
  --option "gateway:API Gateway"
```
If an agent can run bash, it can reach you.
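As a minimal sketch of that pattern in Python, assuming the CLI blocks until the human responds and prints the answer to stdout (the exact output format is an assumption here, not confirmed behavior):

```python
import shlex
import subprocess


def ask_human(cmd: str) -> str:
    """Treat the human as a tool-call: run a blocking CLI command
    and return whatever they answered.

    `cmd` is assumed to be an extendo invocation (like the
    `extendo artifact create ...` example above) that exits only
    after the user responds on their phone.
    """
    result = subprocess.run(
        shlex.split(cmd),
        capture_output=True,  # collect the human's answer from stdout
        text=True,
        check=True,           # surface CLI failures to the agent
    )
    return result.stdout.strip()
```

An agent script would pass the full `extendo artifact create …` command and branch on the returned value (e.g. the chosen option id such as `backend`), assuming that is what the CLI prints.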
I’ve been using it with Claude Code, OpenClaw, Pi, and custom scripts.
The backend protocol is open: self-host it for tighter integration with your own system, or use the shared server that's available. The repo also includes an OpenClaw plugin, a Claude Code harness, a core library, and sample code for customizing your own backend.
I used Extendo to build Extendo: design decisions, approvals, plan reviews, prioritization. Agents coded. I made decisions while walking the dog and between sets at the gym.
*Links*
3-min demo: https://www.youtube.com/watch?v=X5Dv9fU7Lb8
TestFlight: https://testflight.apple.com/join/PGHRCnQ4