What it does:
- An AI agent sends a tool call (`POST /api/call_human`), sketched below
- A human accepts the task and submits photo/video/text proof
- The agent receives a structured result for the downstream workflow
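A minimal agent-side sketch of the call, assuming a bearer-token header and a synchronous JSON response; the field names (`task`, `proof_required`, `timeout_s`) are illustrative guesses, not the documented schema, and the OpenAPI spec linked below is the real contract.

```python
# Minimal sketch of the agent-side tool call. The auth header and the
# field names (task, proof_required, timeout_s) are assumptions for
# illustration; the OpenAPI spec defines the actual contract. This also
# assumes a synchronous response; the real API may require polling or a
# callback while the human completes the task.
import requests

API_BASE = "https://sinkai.tokyo"
API_KEY = "sk-..."  # hypothetical credential

def call_human(task: str, proof: str = "photo", timeout_s: int = 3600) -> dict:
    """Dispatch a task to a human and return the structured result."""
    resp = requests.post(
        f"{API_BASE}/api/call_human",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"task": task, "proof_required": proof, "timeout_s": timeout_s},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

result = call_human("Photograph the storefront and confirm it is open.")
```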
Current focus:
- Reliability at handoff boundaries (planner -> executor -> verifier)
- Human-in-the-loop operations with explicit failure states (see the sketch after this list)
- MCP/OpenAPI-friendly integration for agent builders
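To make "explicit failure states" concrete, here is a hedged sketch of how the executor-to-verifier handoff might branch on the result; the status values (`completed`, `rejected`, `timeout`) are assumptions, not the API's documented enum.

```python
# Sketch of handling a human-task result at the executor -> verifier
# boundary. Status values are assumed for illustration; the actual
# enum lives in the OpenAPI spec.
def handle_result(result: dict) -> str:
    status = result.get("status")
    if status == "completed":
        return result["proof_url"]  # pass proof on to the verifier step
    if status == "rejected":
        raise RuntimeError("Human declined the task; re-plan or escalate.")
    if status == "timeout":
        raise TimeoutError("No human accepted in time; retry or fail loudly.")
    raise ValueError(f"Unknown status {status!r}; fail closed, not silently.")
```

The point of the last branch is that an unrecognized state should halt the workflow rather than let the agent improvise past the handoff boundary.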
Docs and API:
- For agents: https://sinkai.tokyo/for-agents
- OpenAPI spec: https://sinkai.tokyo/openapi.json
- Repo: https://github.com/tetubrah-del/Tool_Call_For_LLM
I would love feedback on:
1. Trust/reliability signals you would require before production use
2. Where to draw the boundary between autonomous execution and human escalation
3. Failure modes we should expose more clearly in API responses (a hypothetical shape is sketched below)
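On question 3, one hypothetical response shape that would make failure modes explicit; none of these fields are claimed to exist in the current API, they are only here to anchor the question.

```python
# Purely hypothetical failure payload to anchor the feedback question;
# no field here is asserted to be part of the current API.
example_failure = {
    "status": "failed",
    "failure_mode": "proof_rejected",  # vs. "no_worker_available", "timeout", ...
    "retryable": False,
    "detail": "Submitted photo did not match the requested location.",
}
```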