Most examples today rely on in-prompt chaining — e.g., a single call where “Agent A does X, then Agent B uses A’s output,” all within one synchronous prompt. This works, but it doesn’t scale well and mixes orchestration logic with prompt logic.
I’m more interested in asynchronous, decoupled orchestration, where:
- Agent A runs independently, produces an artifact/state,
- and Agent B is invoked later (event- or task-driven) to pick up that output.
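To make the pattern concrete, here's a minimal sketch of that hand-off using only the Python stdlib. The names `agent_a`/`agent_b` are hypothetical placeholders for LLM calls, an in-memory `queue.Queue` stands in for a real message broker, and a dict stands in for durable artifact storage — the point is only that B receives a task ID, not A's prompt or context:

```python
# Sketch: decoupled agent hand-off via a persisted artifact + event.
# agent_a / agent_b are hypothetical stand-ins for LLM calls; in practice
# the queue would be SQS/Kafka/etc. and the store S3/a database.
import json
import queue

task_queue = queue.Queue()   # stands in for a message broker
artifact_store = {}          # stands in for durable storage

def agent_a(task_id: str, prompt: str) -> None:
    """Runs independently: persist an artifact, then emit an event."""
    artifact = {"task_id": task_id, "summary": prompt.upper()}  # fake LLM output
    artifact_store[task_id] = json.dumps(artifact)              # persist state
    task_queue.put(task_id)                                     # event: "A is done"

def agent_b(task_id: str) -> str:
    """Invoked later (event-driven); looks up A's output by task_id only."""
    artifact = json.loads(artifact_store[task_id])
    return f"B saw: {artifact['summary']}"

# Some time later, a dispatcher drains the queue and invokes B:
agent_a("t1", "hello world")
print(agent_b(task_queue.get()))  # B saw: HELLO WORLD
```

The key design choice is that the only coupling between A and B is the artifact schema and the task ID — orchestration (who runs when) lives in the dispatcher, not in either agent's prompt.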
Curious how people are handling this in practice:
- Are you using message queues, event buses, cron jobs, workflow engines like Temporal, serverless functions, or custom schedulers?
- How are you persisting and passing state between agents?
- Any patterns emerging for error handling, retries, or versioning agent behaviors?
- Are you treating LLM “agents” like microservices, or is there a better abstraction?
Would appreciate hearing what architectures or frameworks have worked (or not worked) for you.