For example:
Imagine Robot A observes that an item is in Zone Z, and Robot B later needs to retrieve it. How do they share that context? Is it via:
- A structured memory layer (like a knowledge graph)?
- Centralized state in a RAG-backed store?
- Something simpler (or messier)?
I’m experimenting with a shared knowledge graph as memory across agents, backed by RAG for unstructured input, and queryable for planning, dependencies, and task dispatch. Would love to know:
- Is anyone else thinking about shared memory across physical agents?
- How are you handling world state, task context, or coordination?
- Any frameworks or lessons you’ve found helpful?
Exploring this space and would really appreciate hearing from others who are building in or around it. Thanks!
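To make the Robot A / Robot B scenario concrete, here is a minimal sketch of what I mean by a shared knowledge-graph memory: a triple store both agents read and write, with wildcard queries for planning. All class and method names are hypothetical illustrations, not any existing framework's API, and a real deployment would need persistence, concurrency control, and staleness handling.

```python
class SharedKnowledgeGraph:
    """In-memory triple store shared by all agents: (subject, predicate, object).
    Hypothetical sketch -- not an existing library's API."""

    def __init__(self):
        self._triples = set()

    def assert_fact(self, subject, predicate, obj):
        """Record an observation as a triple."""
        self._triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern; None acts as a wildcard."""
        return [
            t for t in self._triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]


# Robot A observes and records; Robot B queries later to plan retrieval.
kg = SharedKnowledgeGraph()
kg.assert_fact("item_42", "located_in", "zone_Z")  # Robot A's observation

matches = kg.query(subject="item_42", predicate="located_in")
zone = matches[0][2]  # Robot B now knows to retrieve from zone_Z
print(zone)  # → zone_Z
```

The RAG layer would sit beside this: unstructured sensor logs or instructions get retrieved and distilled into triples like the one above, so planners only ever query structured state.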
scowler•15h ago