frontpage.
Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•1m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
1•dev_tty01•4m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•5m ago•1 comments

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•13m ago•1 comments

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•13m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
1•mooreds•14m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•14m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•16m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•18m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•19m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•20m ago•0 comments

Apple Tried to Tamper Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•22m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•23m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•23m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•25m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
6•geox•29m ago•0 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•30m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•34m ago•1 comments

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•41m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•44m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•45m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
5•doener•47m ago•1 comments

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•52m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•55m ago•1 comments

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•58m ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•1h ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•1h ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•1h ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
2•tolerance•1h ago•0 comments

E-Commerce vs. Social Commerce

https://moondala.one/
1•HamoodBahzar•1h ago•1 comments

Show HN: A Multi-agent system where LLMs challenge each other's answers

2•AlphaSean•1mo ago
Hey HN,

I've been experimenting with forcing multiple LLMs to critique each other before producing a final answer.

In practice, I kept working around single-model limitations by opening multiple tabs, pasting the same question into different models, comparing responses, and then manually challenging each model with the others' arguments. (Maybe some of you can relate.) It worked sometimes, but it was cumbersome, slow, and hard to do systematically and efficiently.

Based on my own experiments, a few things seem to explain why different models arrive at different responses: they have different guardrails, different tendencies encoded in their weights, and different training data. And the biggest kicker of all: they still hallucinate. The question I wanted to test was whether making those differences explicit, rather than relying on one model to self-correct, could reduce blind spots and improve the overall quality of the answer.

So I built Consilium9.

The problem: When a single LLM answers a contested question, it implicitly commits to one set of assumptions and presents that as the answer. Even when prompted for pros and cons, the result often feels like a simple list submitted for homework rather than a genuine clash of perspectives. There's no pressure for the model to defend weak points or respond to counterarguments.

The approach: Instead of relying on one model to self-critique, the system runs a simple process of up to three rounds across multiple models (a rough code sketch follows the steps):

Initial positions: Query the models (for example, Grok, Gemini, and GPT) independently on the same question.

Critique round: Each model is shown the others' responses and asked to identify flaws, missing context, or questionable assumptions.

Synthesis: A final position is produced by combining the strongest points from each side, explicitly calling out illogical or weak reasoning instead of smoothing it over.

The goal isn't chain-of-thought introspection, but exposing genuinely different model priors to each other and seeing what survives critique.
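
To make the rounds concrete, here is a rough sketch of that loop in Python. The model names, the `ask_model` wrapper, and the prompt wording are placeholders I made up for illustration, not Consilium9's actual code:

```python
# Sketch of the (up to) three-round flow described above.
# `ask_model(model, prompt)` is a hypothetical async wrapper around each
# provider's chat API -- fill it in with your own client code.
import asyncio

MODELS = ["grok", "gemini", "gpt"]  # example panel from the post

async def ask_model(model: str, prompt: str) -> str:
    """Placeholder: call the given provider's API and return its text reply."""
    raise NotImplementedError

async def debate(question: str) -> str:
    # Round 1: independent initial positions, queried concurrently.
    answers = await asyncio.gather(*(ask_model(m, question) for m in MODELS))
    positions = dict(zip(MODELS, answers))

    # Round 2: each model sees the others' responses and critiques them.
    async def critique(model: str) -> str:
        others = "\n\n".join(f"[{m}]\n{a}" for m, a in positions.items() if m != model)
        prompt = (
            f"Question: {question}\n\n"
            f"Your earlier answer:\n{positions[model]}\n\n"
            f"Other models answered:\n{others}\n\n"
            "Point out flaws, missing context, or questionable assumptions in the "
            "other answers, and concede any points you now accept."
        )
        return await ask_model(model, prompt)

    critiques = dict(zip(MODELS, await asyncio.gather(*(critique(m) for m in MODELS))))

    # Round 3: one synthesis pass over all positions and critiques.
    transcript = "\n\n".join(
        f"[{m} position]\n{positions[m]}\n\n[{m} critique]\n{critiques[m]}" for m in MODELS
    )
    synthesis_prompt = (
        f"Question: {question}\n\n{transcript}\n\n"
        "Combine the strongest points, make the deciding criteria explicit, "
        "and call out weak or illogical reasoning rather than smoothing it over."
    )
    return await ask_model(MODELS[0], synthesis_prompt)
```

Running each round through asyncio.gather keeps the added latency closer to one round-trip per round rather than one per model.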

Example: LeBron vs. Jordan. As a test, I ran the GOAT debate through the system. This obviously isn't a proper benchmark, but it's useful for seeing whether models will actually concede points when confronted with counterarguments. You can see it for yourself here: https://consilium9.com/#/s/TQzBqR8b

Round 1: Grok leaned Jordan (peak dominance, Finals record). Gemini leaned LeBron (longevity, cumulative stats).

Round 2: Gemini conceded Jordan's peak was higher. Grok conceded that LeBron's 50k points is statistically undeniable.

Synthesis: Instead of "they're both great," the final answer made the criteria explicit: Jordan if you weight peak dominance more heavily, LeBron if you value sustained production and longevity.

A single model can be prompted to do something similar, but in practice the concessions here tended to be sharper when they came from a different model family rather than self-reflection.

What I've observed so far: This helps most when the models start with genuinely different initial positions. If they broadly agree in round one, the critique step adds little.

The critique round sometimes surfaces blind spots or unstated assumptions that don't appear in single-shot prompts.

So far, this approach is most useful for decision-shaping questions that benefit from factoring in different perspectives, or when you want to mitigate blind spots as much as possible; it's less useful for straightforward factual lookups.

Stack:
Frontend: React + Vite + Tailwind
Backend: FastAPI (Python)
DB: Supabase
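
The backend surface for something like this can stay small. A hypothetical FastAPI endpoint wrapping the debate loop sketched above might look like the following (Supabase persistence and auth omitted; again, this is not Consilium9's actual code):

```python
# Hypothetical endpoint shape for a FastAPI backend driving the debate loop.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DebateRequest(BaseModel):
    question: str

class DebateResponse(BaseModel):
    answer: str

@app.post("/debate", response_model=DebateResponse)
async def run_debate(req: DebateRequest) -> DebateResponse:
    # `debate` is the orchestration sketch from earlier in this post.
    answer = await debate(req.question)
    return DebateResponse(answer=answer)
```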

What I'm curious about: The trade-off here is clearly quality vs. latency/cost. I've found the quality boost is worth it for important queries or decision-making tasks, but less so for simple, quick queries.

If you've tried similar architectures, what has your experience been, and what have you observed with specific types of reasoning tasks?

Demo: https://consilium9.com