Show HN: A Multi-agent system where LLMs challenge each other's answers

2•AlphaSean•2h ago
Hey HN,

I've been experimenting with forcing multiple LLMs to critique each other before producing a final answer.

In practice, I kept working around single-model limitations by opening multiple tabs, pasting the same question into different models, comparing responses, and then manually challenging each model with the others' arguments. (Maybe some of you can relate.) It worked sometimes, but it was cumbersome, slow, and hard to do systematically and efficiently.

Based on my own experiments, a few things seem to drive why different models arrive at different responses: they have different guardrails, different tendencies encoded in their weights, and different training data. And the biggest kicker of all: they still hallucinate. The question I wanted to test was whether making those differences explicit, rather than relying on one model to self-correct, could reduce blind spots and improve the overall quality of the answer.

So I built Consilium9.

The problem: When a single LLM answers a contested question, it implicitly commits to one set of assumptions and presents that as the answer. Even when prompted for pros and cons, the result often feels like a simple list submitted for homework rather than a genuine clash of perspectives. There's no pressure for the model to defend weak points or respond to counterarguments.

The approach: Instead of relying on one model to self-critique, the system runs a simple process of up to three rounds across multiple models:

Initial positions: Query each model (for example, Grok, Gemini, and GPT) independently on the same question.

Critique round: Each model is shown the others' responses and asked to identify flaws, missing context, or questionable assumptions.

Synthesis: A final position is produced by combining the strongest points from each side, explicitly calling out illogical or weak reasoning instead of smoothing it away.

The goal isn't chain-of-thought introspection, but exposing genuinely different model priors to each other and seeing what survives critique.
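
For a concrete sense of the shape of that loop, here's a minimal Python sketch. The ask() helper, the model names, and the prompts are placeholders for illustration only, not Consilium9's actual code or any provider's SDK:

    MODELS = ["grok", "gemini", "gpt"]

    def ask(model: str, prompt: str) -> str:
        """Placeholder: send a prompt to one model and return its text reply."""
        raise NotImplementedError("wire this up to your LLM client of choice")

    def debate(question: str) -> str:
        # Round 1: independent initial positions.
        positions = {m: ask(m, question) for m in MODELS}

        # Round 2: each model critiques the others' answers.
        critiques = {}
        for m in MODELS:
            others = "\n\n".join(f"[{o}] {positions[o]}" for o in MODELS if o != m)
            critiques[m] = ask(
                m,
                f"Question: {question}\n\nOther models answered:\n{others}\n\n"
                "Point out flaws, missing context, or questionable assumptions.",
            )

        # Round 3: synthesize a final position from all answers and critiques.
        transcript = "\n\n".join(
            f"[{m} answer] {positions[m]}\n[{m} critique] {critiques[m]}" for m in MODELS
        )
        return ask(
            MODELS[0],
            f"Question: {question}\n\n{transcript}\n\n"
            "Combine the strongest points and call out weak reasoning explicitly "
            "instead of smoothing it over.",
        )

Using the first model as the synthesizer is an arbitrary choice in this sketch; any aggregation step, including a separate neutral model, would slot in there.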

Example: LeBron vs. Jordan

As a test, I ran the GOAT debate through the system. This obviously isn't a proper benchmark, but it's useful for seeing whether models will actually concede points when confronted with counterarguments. You can see it for yourself here: https://consilium9.com/#/s/TQzBqR8b

Round 1: Grok leaned Jordan (peak dominance, Finals record). Gemini leaned LeBron (longevity, cumulative stats).

Round 2: Gemini conceded Jordan's peak was higher. Grok conceded that LeBron's 50k points are statistically undeniable.

Synthesis: Instead of "they're both great," the final answer made the criteria explicit: Jordan if you weight peak dominance more heavily, LeBron if you value sustained production and longevity.

A single model can be prompted to do something similar, but in practice the concessions here tended to be sharper when they came from a different model family rather than self-reflection.

What I've observed so far: This helps most when the models start with genuinely different initial positions. If they broadly agree in round one, the critique step doesn't add much.

The critique round sometimes surfaces blind spots or unstated assumptions that don't appear in single-shot prompts.

So far, this approach is useful for decision-shaping questions that benefit from factoring in different perspectives, or for queries where mitigating blind spots as much as possible really matters, rather than for straightforward factual lookups.

Stack:
Frontend: React + Vite + Tailwind
Backend: FastAPI (Python)
DB: Supabase
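
Given that stack, a request handler for this might look something like the FastAPI sketch below. The route name, request fields, and run_rounds() helper are assumptions for illustration, not the actual Consilium9 API:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class DebateRequest(BaseModel):
        question: str
        models: list[str] = ["grok", "gemini", "gpt"]

    def run_rounds(question: str, models: list[str]) -> str:
        """Placeholder for the multi-round debate loop sketched earlier."""
        raise NotImplementedError

    @app.post("/debate")
    def debate_endpoint(req: DebateRequest) -> dict:
        # Fan out to the selected models, run the critique and synthesis
        # rounds, and (in the real app) persist the transcript, e.g. to Supabase.
        answer = run_rounds(req.question, req.models)
        return {"question": req.question, "answer": answer}

With uvicorn serving the app, a POST to /debate with a JSON body like {"question": "..."} would return the synthesized answer.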

What I'm curious about: The trade-off here is clearly quality vs. latency/cost. I've found the quality boost is worth it for important queries or decision-making tasks, but less so for simple, quick queries.

If you've tried similar architectures, what's your experience been, and what have you observed with specific types of reasoning tasks?

Demo: https://consilium9.com

Zpdf: PDF text extraction in Zig – 8x faster than MuPDF

https://github.com/Lulzx/zpdf
1•lulzx•32s ago•1 comment

The $100M Mistake: Microsoft PhotoDraw

https://www.youtube.com/watch?v=bGmXCb-irJ0
1•bane•1m ago•0 comments

Desktop Classic System – Spatial computing hearkening back to classic Mac OS

https://mycophobia.org/dcs/
2•todsacerdoti•3m ago•0 comments

Can a Commodore 1541 Disk Drive Be Used as a General Purpose Computer? [video]

https://www.youtube.com/watch?v=6loDwvG4CP8
2•ok123456•5m ago•0 comments

Newsly – Analyze Polymarket Events

https://newsly.studio
1•popcornisgold•6m ago•0 comments

Growing Reddit Topics

https://freesubstats.com/topics
1•jamboy•6m ago•0 comments

A Modern Recommender Model Architecture

https://cprimozic.net/blog/anime-recommender-model-architecture/
1•Ameo•6m ago•0 comments

Ask HN: My multi-agent financial sentiment architecture

1•CLCKKKKK•8m ago•0 comments

Easily create and view 3D splat files from 2D images with Apple's ML Sharp model

https://github.com/boutell/ml-sharp-ez
2•boutell•8m ago•0 comments

Troy: Turkish Payment Method Alternative

https://www.troyodeme.com/en
2•Fethbita•8m ago•0 comments

Show HN: Mirror_hash – Hash anything with C++ reflection

https://github.com/FranciscoThiesen/mirror_hash
1•fthiesen•10m ago•0 comments

John Carey obituary: literary critic

https://www.thetimes.com/uk/obituaries/article/john-carey-obituary-literary-critic-mxjvmfxml
1•Caiero•10m ago•0 comments

Show HN: Apache TacticalMesh – Open-source tactical mesh networking for defense

https://github.com/TamTunnel/Apache-TacticalMesh
1•pp10•11m ago•0 comments

Yet, another temple, but now in space

https://orbitaltemple.art/
2•pavoniedson•11m ago•0 comments

Show HN: Brennerbot.org – Generalizing the scientific methods of Sydney Brenner

https://brennerbot.org
1•eigenvalue•13m ago•0 comments

Common-Mode Chokes PDF (2006)

https://remoteqth.com/img/ZAW-WIKI/cmcc/CommonModeChokesW1HIS.pdf
1•crymer11•14m ago•0 comments

China mandates 50% domestic equipment rule for chipmakers

https://www.reuters.com/world/china/china-mandates-50-domestic-equipment-rule-chipmakers-sources-...
2•novaRom•15m ago•0 comments

We Fall for Narcissistic Leaders, Starting in Grade School

https://www.nytimes.com/2025/12/29/opinion/why-we-fall-for-narcissistic-leaders-starting-in-grade...
2•whack•18m ago•0 comments

The Napoleon of Notting Hill

https://en.wikipedia.org/wiki/The_Napoleon_of_Notting_Hill
1•tosh•19m ago•0 comments

Some Flexibility with Go's Sumdb

https://blog.yossarian.net/2025/12/29/Some-flexibility-with-Go-s-sumdb
1•woodruffw•19m ago•0 comments

Sustainable 3D printing using rapid-set clay concrete with biobased additives

https://link.springer.com/article/10.1007/s42114-025-01456-1
1•PaulHoule•19m ago•0 comments

DnsMesh for Kubernetes Workloads

1•woodprogrammer•20m ago•0 comments

RIP MTV – 44 of the Best Moments

https://www.thatericalper.com/2025/12/30/r-i-p-mtv-here-are-44-of-the-best-moments-from-your/
1•thm•20m ago•0 comments

Updated CI/CD for KiCad 9 and GitLab

https://sschueller.github.io/posts/ci-cd-with-kicad-2025/
1•sschueller•22m ago•0 comments

Luna – Space Simulation

https://luna.watermelonson.com/
1•thunderbong•24m ago•0 comments

2025 Bitcoin Node Performance Tests

https://blog.lopp.net/2025-bitcoin-node-performance-tests/
1•enz•27m ago•0 comments

M-Lab: Measure the Internet, save the data, and make it accessible and useful

https://www.measurementlab.net/
1•tanelpoder•27m ago•0 comments

Foreign tech workers are avoiding travel to the US

https://www.computerworld.com/article/4110681/foreign-tech-workers-are-avoiding-travel-to-the-us....
16•CrankyBear•29m ago•4 comments

David Long's "Adventure 6" (LONG0751) has been found

https://quuxplusone.github.io/blog/2025/12/29/long0751/
2•quuxplusone•30m ago•0 comments

Greenhouse Gas Emission Data: Public, difficult to access and not always correct [video]

https://media.ccc.de/v/39c3-greenhouse-gas-emission-data-public-difficult-to-access-and-not-alway...
1•hannob•30m ago•0 comments