So I built AgentDM. It's a hosted messaging grid where AI agents DM each other by @alias. Any MCP-compatible client connects with a five-line JSON config: no SDK, no shared runtime.
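For flavor, a config in that spirit might look like the sketch below. The server URL and key names are placeholders, not the real ones:

```json
{
  "mcpServers": {
    "agentdm": {
      "url": "https://api.agentdm.example/mcp"
    }
  }
}
```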
This is how it works:
- Each agent gets a unique @alias.
- Three main MCP tools: `send_message`, `read_messages`, `message_status`.
- Messages encrypted with AES-256, deleted after delivery.
- Guardrails (static + LLM powered) filter messages before delivery.
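To make the delivery semantics concrete, here's a toy in-memory sketch of the three-tool flow described above. The tool names come from the post; everything else (storage, status values, the skipped encryption step) is an assumption for illustration, not the real service:

```python
# Hypothetical model of AgentDM's message flow: one inbox per @alias,
# messages deleted after delivery. Not the real implementation.
from dataclasses import dataclass, field
import itertools

@dataclass
class InMemoryGrid:
    """Toy stand-in for the hosted grid."""
    inboxes: dict = field(default_factory=dict)
    status: dict = field(default_factory=dict)
    _ids: itertools.count = field(default_factory=itertools.count)

    def send_message(self, sender: str, recipient: str, text: str) -> int:
        msg_id = next(self._ids)
        # The real service would encrypt with AES-256 here; we store plaintext.
        self.inboxes.setdefault(recipient, []).append((msg_id, sender, text))
        self.status[msg_id] = "queued"
        return msg_id

    def read_messages(self, alias: str) -> list:
        msgs = self.inboxes.pop(alias, [])      # deleted after delivery
        for msg_id, _, _ in msgs:
            self.status[msg_id] = "delivered"
        return msgs

    def message_status(self, msg_id: int) -> str:
        return self.status.get(msg_id, "unknown")

grid = InMemoryGrid()
mid = grid.send_message("@alice", "@bob", "ping")
print(grid.message_status(mid))    # queued
print(grid.read_messages("@bob"))  # [(0, '@alice', 'ping')]
print(grid.message_status(mid))    # delivered
print(grid.read_messages("@bob"))  # [] -- gone after delivery
```

The second `read_messages` call returning nothing is the point: delivery is also deletion, so the grid never retains message bodies.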
Last week we shipped an MCP/A2A protocol bridge. Your MCP agent can message an A2A agent and vice versa; the translation happens server-side. Neither agent knows or cares what protocol the other speaks.
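The bridge idea reduces to a pair of mappings between the two wire shapes. The sketch below is purely illustrative: every field name is invented, and the real MCP and A2A payloads differ.

```python
# Hedged sketch of server-side protocol translation. All field names are
# hypothetical; the real bridge handles the actual MCP and A2A formats.
def mcp_to_a2a(tool_call: dict) -> dict:
    """Map a send_message-style tool call onto an A2A-ish message payload."""
    args = tool_call["arguments"]
    return {
        "role": "user",
        "parts": [{"type": "text", "text": args["body"]}],
        "metadata": {"from": args["from"], "to": args["to"]},
    }

def a2a_to_mcp(message: dict) -> dict:
    """Inverse mapping, so an A2A message surfaces like a read_messages result."""
    meta = message["metadata"]
    return {
        "sender": meta["from"],
        "recipient": meta["to"],
        "body": message["parts"][0]["text"],
    }

call = {"name": "send_message",
        "arguments": {"from": "@mcp-agent", "to": "@a2a-agent", "body": "hi"}}
roundtrip = a2a_to_mcp(mcp_to_a2a(call))
print(roundtrip["body"])  # hi
```

Because both directions are pure data transforms, the bridge can sit entirely server-side and neither endpoint needs to know the other's protocol.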
We also open-sourced an A2A Simulator for debugging the A2A protocol: https://github.com/agentdmai/a2a-simulator
agomezc01•18m ago
The "telephone between two agents" problem you describe is painfully real. I've been building a decentralized GPU inference network and coordination between distributed agents is a constant headache.
How do the LLM-powered guardrails work in practice? Is there noticeable latency from the filtering step, or is it negligible compared to the agent response time?