
Show HN: A Multi-agent system where LLMs challenge each other's answers

2•AlphaSean•1mo ago
Hey HN,

I've been experimenting with forcing multiple LLMs to critique each other before producing a final answer.

In practice, I kept working around single-model limitations by opening multiple tabs, pasting the same question into different models, comparing responses, and then manually challenging each model with the others' arguments. (Maybe some of you can relate.) It worked sometimes, but it was cumbersome, slow, and hard to do systematically.

Based on my own experiments, a few things seem to drive why different models arrive at different responses: they have different guardrails, different tendencies encoded in their weights, and different training data. And the biggest kicker of all: they still hallucinate. The question I wanted to test was whether making those differences explicit, rather than relying on one model to self-correct, could reduce blind spots and improve the overall quality of the answer.

So I built Consilium9.

The problem: When a single LLM answers a contested question, it implicitly commits to one set of assumptions and presents that as the answer. Even when prompted for pros and cons, the result often feels like a simple list submitted for homework rather than a genuine clash of perspectives. There's no pressure for the model to defend weak points or respond to counterarguments.

The approach: Instead of relying on one model to self-critique, the system runs a simple process of up to three rounds across multiple models:

Initial positions: Query each model (for example, Grok, Gemini, and GPT) independently on the same question.

Critique round: Each model is shown the others' responses and asked to identify flaws, missing context, or questionable assumptions.

Synthesis: A final position is produced by combining the strongest points from each side, explicitly calling out illogical or weak reasoning instead of smoothing it over.

The goal isn't chain-of-thought introspection, but exposing genuinely different model priors to each other and seeing what survives critique.
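
To make the flow concrete, here is a minimal sketch of that loop in Python (the backend's language). The query_model helper, the model list, and the prompt wording are illustrative assumptions, not Consilium9's actual implementation.

```python
# Minimal sketch of the up-to-three-round debate loop, assuming a
# provider-agnostic query_model() helper. Model names and prompts are
# illustrative; this is not Consilium9's actual implementation.

MODELS = ["grok", "gemini", "gpt"]

def query_model(model: str, prompt: str) -> str:
    # Placeholder: swap in the real SDK call for each provider.
    return f"<{model} response to: {prompt[:40]}...>"

def debate(question: str) -> str:
    # Round 1: independent initial positions.
    positions = {m: query_model(m, question) for m in MODELS}

    # Round 2: each model critiques the others' responses.
    critiques = {}
    for m in MODELS:
        others = "\n\n".join(f"[{o}]: {positions[o]}" for o in MODELS if o != m)
        critiques[m] = query_model(
            m,
            f"Question: {question}\n\nOther models answered:\n{others}\n\n"
            "Identify flaws, missing context, or questionable assumptions.",
        )

    # Round 3: synthesize a final position from positions plus critiques.
    transcript = "\n\n".join(
        f"[{m} position]: {positions[m]}\n[{m} critique]: {critiques[m]}"
        for m in MODELS
    )
    return query_model(
        MODELS[0],
        f"Question: {question}\n\nDebate transcript:\n{transcript}\n\n"
        "Combine the strongest points and explicitly call out weak reasoning.",
    )
```

Which model performs the synthesis (here, simply the first in the list) is a design choice; a dedicated judge model would fit the same structure.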

Example: LeBron vs. Jordan

As a test, I ran the GOAT debate through the system. This obviously isn't a proper benchmark, but it's useful for seeing whether models will actually concede points when confronted with counterarguments. You can see it for yourself here: https://consilium9.com/#/s/TQzBqR8b

Round 1: Grok leaned Jordan (peak dominance, Finals record). Gemini leaned LeBron (longevity, cumulative stats).

Round 2: Gemini conceded Jordan's peak was higher. Grok conceded that LeBron's 50k career points are statistically undeniable.

Synthesis: Instead of "they're both great," the final answer made the criteria explicit: Jordan if you weight peak dominance more heavily, LeBron if you value sustained production and longevity.

A single model can be prompted to do something similar, but in practice the concessions here tended to be sharper when they came from a different model family than from self-reflection.

What I've observed so far: This helps most when the models start with genuinely different initial positions. If they broadly agree in round one, the critique step adds little.

The critique round sometimes surfaces blind spots or unstated assumptions that don't appear in single-shot prompts.

So far, this approach is most useful for decision-shaping questions that benefit from factoring in different perspectives, or when you want to mitigate blind spots as much as possible, rather than for straightforward factual lookups.

Stack:
Frontend: React + Vite + Tailwind
Backend: FastAPI (Python)
DB: Supabase
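
For illustration, here is a minimal sketch of how the debate loop might be exposed through the FastAPI backend. The endpoint path, the request schema, and the debate stub are assumptions; the post doesn't show the actual API.

```python
# Hypothetical FastAPI wiring for the debate loop. Path and schema are
# assumptions; `debate` stands in for the three-round loop sketched above.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DebateRequest(BaseModel):
    question: str

def debate(question: str) -> str:
    # Stub standing in for the earlier three-round sketch.
    return f"synthesized answer for: {question}"

@app.post("/debate")
def run_debate(req: DebateRequest) -> dict:
    # Runs the full multi-model exchange and returns only the synthesis.
    return {"answer": debate(req.question)}
```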

What I'm curious about: The trade-off here is clearly quality vs. latency/cost. I've found the quality boost is worth it for important queries or decision-making tasks, but less so for simple, quick queries.

If you've tried similar architectures, what are your experiences and observations with specific types of reasoning tasks?

Demo: https://consilium9.com

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
1•ArtemZ•2m ago•0 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•3m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
1•LiamPowell•5m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•7m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•9m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•21m ago•1 comment

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•23m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
2•savrajsingh•24m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•25m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•29m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•34m ago•1 comment

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•36m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•42m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
2•rolph•46m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•48m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•53m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•54m ago•1 comment

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•57m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•57m ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•58m ago•1 comment

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comment

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comment

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
4•thread_id•1h ago•1 comment

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments