
A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•20s ago•1 comment

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•7m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
1•bediger4000•10m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
1•dabinat•11m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
2•doener•14m ago•1 comment

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•18m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•21m ago•1 comment

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•24m ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•28m ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•30m ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•31m ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
1•tolerance•32m ago•0 comments

E-Commerce vs. Social Commerce

https://moondala.one/
1•HamoodBahzar•32m ago•1 comment

Avoiding Modern C++ – Anton Mikhailov [video]

https://www.youtube.com/watch?v=ShSGHb65f3M
2•linkdd•33m ago•0 comments

Show HN: AegisMind–AI system with 12 brain regions modeled on human neuroscience

https://www.aegismind.app
2•aegismind_app•38m ago•1 comment

Zig – Package Management Workflow Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
1•Retro_Dev•39m ago•0 comments

AI-powered text correction for macOS

https://taipo.app/
1•neuling•43m ago•1 comment

AppSecMaster – Learn Application Security with hands-on challenges

https://www.appsecmaster.net/en
1•aqeisi•44m ago•1 comment

Fibonacci Number Certificates

https://www.johndcook.com/blog/2026/02/05/fibonacci-certificate/
2•y1n0•45m ago•0 comments

AI Overviews are killing the web search, and there's nothing we can do about it

https://www.neowin.net/editorials/ai-overviews-are-killing-the-web-search-and-theres-nothing-we-c...
4•bundie•50m ago•1 comment

City skylines need an upgrade in the face of climate stress

https://theconversation.com/city-skylines-need-an-upgrade-in-the-face-of-climate-stress-267763
3•gnabgib•51m ago•0 comments

1979: The Model World of Robert Symes [video]

https://www.youtube.com/watch?v=HmDxmxhrGDc
1•xqcgrek2•55m ago•0 comments

Satellites Have a Lot of Room

https://www.johndcook.com/blog/2026/02/02/satellites-have-a-lot-of-room/
3•y1n0•56m ago•0 comments

1980s Farm Crisis

https://en.wikipedia.org/wiki/1980s_farm_crisis
4•calebhwin•56m ago•1 comment

Show HN: FSID - Identifier for files and directories (like ISBN for Books)

https://github.com/skorotkiewicz/fsid
1•modinfo•1h ago•0 comments

Show HN: Holy Grail: Open-Source Autonomous Development Agent

https://github.com/dakotalock/holygrailopensource
1•Moriarty2026•1h ago•1 comment

Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•1h ago•1 comment

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•1h ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
3•rolph•1h ago•1 comment

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•1h ago•3 comments

Show HN: A Multi-agent system where LLMs challenge each other's answers

2•AlphaSean•1mo ago
Hey HN,

I've been experimenting with forcing multiple LLMs to critique each other before producing a final answer.

In practice, I kept working around single-model limitations by opening multiple tabs, pasting the same question into different models, comparing responses, and then manually challenging each model with the others' arguments. (Maybe some of you can relate.) It worked sometimes, but it was cumbersome, slow, and hard to do systematically.

Based on my own experiments, a few things seem to drive why different models arrive at different responses: they have different guardrails, different tendencies encoded in their weights, and different training data. And the biggest kicker of all: they still hallucinate. The question I wanted to test was whether making those differences explicit, rather than relying on one model to self-correct, could reduce blind spots and improve the overall quality of the answer.

So I built Consilium9.

The problem: When a single LLM answers a contested question, it implicitly commits to one set of assumptions and presents that as the answer. Even when prompted for pros and cons, the result often feels like a simple list submitted for homework rather than a genuine clash of perspectives. There's no pressure for the model to defend weak points or respond to counterarguments.

The approach: Instead of relying on one model to self-critique, the system runs a simple process of up to three rounds across multiple models:

Initial positions (example): Query Grok, Gemini, and GPT independently on the same question.

Critique round: Each model is shown the others' responses and asked to identify flaws, missing context, or questionable assumptions.

Synthesis: A final position is produced by combining the strongest points from each side, explicitly calling out illogical or weak reasoning instead of smoothing it away.

The goal isn't chain-of-thought introspection, but exposing genuinely different model priors to each other and seeing what survives critique.
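
To make the flow concrete, here's a minimal sketch of that loop in Python. This is illustrative only, not the actual Consilium9 code: query_model is a hypothetical stand-in for the real provider clients, and the prompts are placeholders.

    # Minimal sketch of the three-round loop (illustrative only, not the
    # actual Consilium9 code). query_model is a hypothetical stand-in for
    # real provider SDK calls (OpenAI, Gemini, xAI, ...).

    MODELS = ["grok", "gemini", "gpt"]

    def query_model(model: str, prompt: str) -> str:
        # Placeholder: swap in the real provider client call here.
        return f"[{model}] answer to: {prompt[:60]}"

    def debate(question: str) -> str:
        # Round 1: independent initial positions.
        positions = {m: query_model(m, question) for m in MODELS}

        # Round 2: each model critiques the others' responses.
        critiques = {}
        for m in MODELS:
            others = "\n\n".join(
                f"[{o}] {positions[o]}" for o in MODELS if o != m
            )
            critiques[m] = query_model(m, (
                f"Question: {question}\n\n"
                f"Other models answered:\n{others}\n\n"
                "Identify flaws, missing context, or questionable assumptions."
            ))

        # Round 3: synthesis that keeps disagreements explicit.
        transcript = "\n\n".join(
            f"[{m} position]\n{positions[m]}\n[{m} critique]\n{critiques[m]}"
            for m in MODELS
        )
        return query_model(MODELS[0], (
            f"Question: {question}\n\nDebate transcript:\n{transcript}\n\n"
            "Combine the strongest points into a final answer, and name "
            "weak or illogical reasoning instead of smoothing it over."
        ))

One natural design choice in a loop like this: if the round-one positions broadly agree, you can short-circuit before the critique round, which is also where most of the latency/cost savings would come from.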

Example: LeBron vs. Jordan

As a test, I ran the GOAT debate through the system. This obviously isn't a proper benchmark, but it's useful for seeing whether models will actually concede points when confronted with counterarguments. You can see it for yourself here: https://consilium9.com/#/s/TQzBqR8b

Round 1: Grok leaned Jordan (peak dominance, Finals record). Gemini leaned LeBron (longevity, cumulative stats).

Round 2: Gemini conceded Jordan's peak was higher. Grok conceded that LeBron's 50k career points are statistically undeniable.

Synthesis: Instead of "they're both great," the final answer made the criteria explicit: Jordan if you weight peak dominance more heavily, LeBron if you value sustained production and longevity.

A single model can be prompted to do something similar, but in practice the concessions tended to be sharper when they came from a different model family than from self-reflection.

What I've observed so far: This helps most when the models start with genuinely different initial positions. If they broadly agree in round one, the critique step adds little.

The critique round sometimes surfaces blind spots or unstated assumptions that don't appear in single-shot prompts.

So far, this approach is most useful for decision-shaping questions that benefit from factoring in different perspectives, or when you want to mitigate blind spots as much as possible; it adds little for straightforward factual lookups.

Stack:
Frontend: React + Vite + Tailwind
Backend: FastAPI (Python)
DB: Supabase
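
For a rough idea of how a stack like that wires together, here's a hypothetical FastAPI endpoint around the debate() sketch above. The endpoint path and field names are my invention; the post doesn't show the real API surface.

    # Hypothetical FastAPI endpoint wrapping the debate() sketch above.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class DebateRequest(BaseModel):
        question: str

    class DebateResponse(BaseModel):
        question: str
        answer: str

    @app.post("/debate", response_model=DebateResponse)
    def run_debate(req: DebateRequest) -> DebateResponse:
        # A production version would make the provider calls async,
        # stream per-round results, and persist transcripts to Supabase.
        return DebateResponse(question=req.question,
                              answer=debate(req.question))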

What I'm curious about: The trade-off here is clearly quality vs. latency/cost. I've found the quality boost is worth it for important queries and decision-making tasks, but less so for quick, simple queries.

If you've tried similar architectures, what's your experience with specific types of reasoning tasks?

Demo: https://consilium9.com