Council just runs your prompt against multiple models at once and shows the responses side by side. That's it.
A few things I noticed while building this:
1. Models disagree with each other way more than I expected. Ask anything slightly subjective or recent, and you'll get meaningfully different answers. It's made me much more skeptical of treating any single response as "the answer."
2. Different models have different failure modes. Claude tends to be cautious and hedge. GPT is confident even when wrong. Perplexity gives sources but sometimes misreads them. Seeing them together makes these patterns obvious.
3. For code, I actually like getting 2-3 different approaches. Even if one is clearly better, seeing alternatives helps me understand the tradeoffs.
Tech: Next.js, OpenRouter for model access, streaming responses in parallel. The annoying part was handling the UI when models respond at different speeds – you don't want the layout jumping around.
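If you're curious, here's roughly what the fan-out looks like. This is a minimal sketch, not Council's actual code: the model IDs and the onToken callback are illustrative, and I'm relying on the fact that OpenRouter exposes an OpenAI-compatible SSE endpoint at /api/v1/chat/completions.

    // Minimal sketch of the fan-out. Model IDs and onToken are illustrative.
    const MODELS = [
      "anthropic/claude-3.5-sonnet", // hypothetical picks
      "openai/gpt-4o",
      "perplexity/sonar",
    ];

    async function streamModel(
      model: string,
      prompt: string,
      onToken: (model: string, token: string) => void,
    ): Promise<void> {
      const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model,
          messages: [{ role: "user", content: prompt }],
          stream: true,
        }),
      });
      if (!res.ok || !res.body) throw new Error(`${model}: HTTP ${res.status}`);

      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      let buf = "";
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        buf += decoder.decode(value, { stream: true });
        const lines = buf.split("\n");
        buf = lines.pop() ?? ""; // keep any partial line for the next chunk
        for (const line of lines) {
          if (!line.startsWith("data: ")) continue; // also skips SSE keep-alive comments
          const data = line.slice(6);
          if (data === "[DONE]") return;
          const token = JSON.parse(data).choices?.[0]?.delta?.content;
          if (token) onToken(model, token);
        }
      }
    }

    // Fire everything at once; each panel updates independently as tokens land.
    function streamAll(prompt: string, onToken: (model: string, token: string) => void) {
      return Promise.allSettled(MODELS.map((m) => streamModel(m, prompt, onToken)));
    }

As for the layout jumping: one way to keep things stable is to render a fixed grid slot per model before any tokens arrive, with a min-height on each panel, so a slow responder never reflows its neighbors.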
No login required to try it. Feedback welcome, especially on what's broken or annoying.