How it works:
1. You ask Claude to brainstorm a topic
2. All configured models respond in parallel (Round 1)
3. Claude reads their responses and pushes back with its own take
4. Models see each other's responses and refine across rounds
5. A synthesizer produces the final consolidated output

Claude isn't just orchestrating: it has full conversation context, so it knows what you're working on and argues its position alongside the other models. They genuinely build on and challenge each other's ideas.
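The round structure above can be sketched as a small loop. This is an illustrative sketch, not brainstorm-mcp's actual code: the `debate` and `Model` names are hypothetical, and real model calls would hit OpenAI-compatible APIs instead of toy functions. `Promise.allSettled` is what gives the "one model failing doesn't kill the debate" behavior.

```typescript
// Hypothetical sketch of the round-based debate loop; names are
// illustrative, not brainstorm-mcp's actual API.
type Model = (topic: string, transcript: string[]) => Promise<string>;

async function debate(
  topic: string,
  models: Record<string, Model>,
  rounds = 3,
): Promise<string[]> {
  const transcript: string[] = [];
  for (let round = 1; round <= rounds; round++) {
    // Every model answers in parallel; allSettled means a single
    // failure doesn't abort the round or the debate.
    const results = await Promise.allSettled(
      Object.entries(models).map(
        async ([name, respond]) =>
          `[round ${round}] ${name}: ${await respond(topic, transcript)}`,
      ),
    );
    for (const r of results) {
      if (r.status === "fulfilled") transcript.push(r.value); // failures dropped
    }
  }
  return transcript; // a synthesizer step would consolidate this
}

// Toy stand-ins for real API calls.
const models: Record<string, Model> = {
  claude: async (t, tr) =>
    tr.length ? `pushing back on: ${tr[tr.length - 1]}` : `take on ${t}`,
  gpt: async (t) => `another angle on ${t}`,
  flaky: async () => { throw new Error("timeout"); }, // simulated outage
};

debate("caching strategy", models).then((t) => console.log(t.join("\n")));
```

With three rounds and one perpetually failing model, the transcript still accumulates two entries per round from the surviving models.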
A 3-round debate with 3 models costs ~$0.02-0.05. One model failing doesn't kill the debate — results are resilient.
npm: npx brainstorm-mcp

GitHub: https://github.com/spranab/brainstorm-mcp

Sample debate (GPT-5.2 vs DeepSeek vs Claude): https://gist.github.com/spranab/c1770d0bfdff409c33cc9f985043...
Free, MIT licensed. Works with any OpenAI-compatible API including local Ollama.
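For the Ollama case, wiring it up would look something like the standard MCP server entry in Claude Desktop's `claude_desktop_config.json`. The `env` variable name below is a placeholder: check the GitHub README for the actual configuration keys brainstorm-mcp expects. Ollama does expose an OpenAI-compatible endpoint at `http://localhost:11434/v1`.

```json
{
  "mcpServers": {
    "brainstorm": {
      "command": "npx",
      "args": ["brainstorm-mcp"],
      "env": {
        "OPENAI_BASE_URL": "http://localhost:11434/v1"
      }
    }
  }
}
```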