I've been experimenting with forcing multiple LLMs to critique each other before producing a final answer.
In practice, I kept working around single-model limitations by opening multiple tabs, pasting the same question into different models, comparing responses, and then manually challenging each model with the others' arguments. (Maybe some of you can relate.) It worked sometimes, but it was cumbersome, slow, and hard to do systematically and efficiently.
Based on my own experiments, a few things seem to drive why different models arrive at different responses: they have different guardrails, different tendencies encoded in their weights, and different training data. And the biggest kicker of all: they still hallucinate. The question I wanted to test was whether making those differences explicit, rather than relying on one model to self-correct, could reduce blind spots and improve the overall quality of the answer.
So I built Consilium9.
The problem: When a single LLM answers a contested question, it implicitly commits to one set of assumptions and presents that as the answer. Even when prompted for pros and cons, the result often feels like a simple list submitted for homework rather than a genuine clash of perspectives. There's no pressure for the model to defend weak points or respond to counterarguments.
The approach: Instead of relying on one model to self-critique, the system runs a simple process of up to three rounds across multiple models (a rough sketch follows the steps below):
Initial positions: Query several models independently on the same question (for example, Grok, Gemini, and GPT).
Critique round: Each model is shown the others' responses and asked to identify flaws, missing context, or questionable assumptions.
Synthesis: A final position is produced by combining the strongest points from each side, explicitly calling out illogical or weak reasoning instead of smoothing it over.
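To make the flow concrete, here's a minimal sketch of the orchestration in Python. This isn't Consilium9's actual code: `run_debate`, the `Model` callable type, and the prompt wording are placeholders for whatever provider clients and prompts you'd actually plug in.

```python
# Minimal sketch of the three-round flow. The model callables are hypothetical
# stand-ins for real provider clients; only the orchestration pattern matters here.
from typing import Callable, Dict

Model = Callable[[str], str]  # takes a prompt, returns a completion


def run_debate(question: str, models: Dict[str, Model], synthesizer: Model) -> str:
    # Round 1: independent initial positions
    positions = {name: model(question) for name, model in models.items()}

    # Round 2: each model critiques the others' responses
    critiques = {}
    for name, model in models.items():
        others = "\n\n".join(
            f"[{other}]\n{text}" for other, text in positions.items() if other != name
        )
        critiques[name] = model(
            f"Question: {question}\n\n"
            f"Other models answered:\n{others}\n\n"
            "Identify flaws, missing context, or questionable assumptions."
        )

    # Round 3: synthesize a final position, keeping disagreements explicit
    transcript = "\n\n".join(
        f"[{name} position]\n{positions[name]}\n\n[{name} critique]\n{critiques[name]}"
        for name in models
    )
    return synthesizer(
        f"Question: {question}\n\n{transcript}\n\n"
        "Combine the strongest points, and call out weak reasoning explicitly "
        "instead of smoothing it over."
    )
```

The synthesizer can be one of the debating models or a separate one; the only requirement is that it sees both the positions and the critiques.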
The goal isn't chain-of-thought introspection, but exposing genuinely different model priors to each other and seeing what survives critique.
Example: LeBron vs. Jordan. As a test, I ran the GOAT debate through the system. This obviously isn't a proper benchmark, but it's useful for seeing whether models will actually concede points when confronted with counterarguments. You can see it for yourself here: https://consilium9.com/#/s/TQzBqR8b
Round 1: Grok leaned Jordan (peak dominance, Finals record). Gemini leaned LeBron (longevity, cumulative stats).
Round 2: Gemini conceded Jordan's peak was higher. Grok conceded that LeBron's 50k points is statistically undeniable.
Synthesis: Instead of "they're both great," the final answer made the criteria explicit: Jordan if you weight peak dominance more heavily, LeBron if you value sustained production and longevity.
A single model can be prompted to do something similar, but in practice the concessions tended to be sharper when they came from a different model family than from self-reflection.
What I've observed so far: This helps most when the models start with genuinely different initial positions. If they broadly agree in round one, the critique step adds little.
The critique round sometimes surfaces blind spots or unstated assumptions that don't appear in single-shot prompts.
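One cheap way to act on that observation (hypothetical, not what Consilium9 does) is to compare the round-one answers and skip the critique round when they broadly agree. The sketch below uses difflib's lexical similarity purely to stay dependency-free; an embedding-based comparison would likely be a better signal.

```python
# Hypothetical gate: skip the critique round when initial answers broadly agree.
# SequenceMatcher is a crude lexical proxy used here only to keep the sketch
# dependency-free.
from difflib import SequenceMatcher
from itertools import combinations
from typing import Dict


def worth_critiquing(positions: Dict[str, str], threshold: float = 0.8) -> bool:
    ratios = [
        SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(positions.values(), 2)
    ]
    # If every pair of answers is highly similar, the critique round adds little.
    return any(r < threshold for r in ratios)
```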
So far, this approach is most useful for decision-shaping questions that benefit from factoring in different perspectives, or when you want to mitigate blind spots as much as possible; it adds little for straightforward factual lookups.
Stack:
Frontend: React + Vite + Tailwind
Backend: FastAPI (Python)
DB: Supabase
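For what it's worth, the orchestration fits behind a single endpoint. The sketch below is a hypothetical FastAPI route shape, not Consilium9's actual API; it assumes the `run_debate` helper from the earlier sketch is in scope, and the model callables here are trivial stubs standing in for real provider clients.

```python
# Hypothetical route shape, not Consilium9's actual API. Assumes run_debate()
# from the earlier sketch is in scope; the callables below are stubs so the
# example stays self-contained.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Stub model callables standing in for real provider clients.
MODELS = {
    "model_a": lambda prompt: f"model_a's take on: {prompt}",
    "model_b": lambda prompt: f"model_b's take on: {prompt}",
}
SYNTHESIZER = lambda prompt: f"synthesis of: {prompt}"


class DebateRequest(BaseModel):
    question: str


class DebateResponse(BaseModel):
    answer: str


@app.post("/debate", response_model=DebateResponse)
def debate(req: DebateRequest) -> DebateResponse:
    return DebateResponse(
        answer=run_debate(req.question, models=MODELS, synthesizer=SYNTHESIZER)
    )
```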
What I'm curious about: The trade-off here is clearly quality vs. latency/cost. I've found the quality boost is worth it for important queries or decision-making tasks, but less so for simple, quick queries.
If you've tried similar architectures, what has your experience been with specific types of reasoning tasks, and what have you observed?
Demo: https://consilium9.com