frontpage.

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•7m ago•1 comment

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•7m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
1•mooreds•8m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•8m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•10m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•12m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•13m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•14m ago•0 comments

Apple Tried to Tamper-Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•16m ago•0 comments

Show HN: Vibe as a Code / VaaC – new approach to vibe coding

https://www.npmjs.com/package/@gace/vaac
1•bstrama•17m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•17m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•19m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
5•geox•23m ago•0 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•24m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•28m ago•1 comment

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•35m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•38m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•39m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
4•doener•41m ago•1 comment

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•45m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•49m ago•1 comment

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•52m ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•56m ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•57m ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•59m ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
2•tolerance•59m ago•0 comments

E-Commerce vs. Social Commerce

https://moondala.one/
1•HamoodBahzar•1h ago•1 comment

Avoiding Modern C++ – Anton Mikhailov [video]

https://www.youtube.com/watch?v=ShSGHb65f3M
2•linkdd•1h ago•0 comments

Show HN: AegisMind–AI system with 12 brain regions modeled on human neuroscience

https://www.aegismind.app
2•aegismind_app•1h ago•1 comment

Zig – Package Management Workflow Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
1•Retro_Dev•1h ago•0 comments

Show HN: LLM Hallucination Detector – Works with GPT, Claude, and Local Models

https://github.com/Mattbusel/LLM-Hallucination-Detection-Script
2•Shmungus•8mo ago
I built a lightweight hallucination detector that works with any LLM API.

It checks for signs of hallucinated or unreliable output using a multi-method approach (overconfidence patterns, factual density, coherence, contradictions, etc.).

What it does:

Works with GPT, Claude, local models (e.g., Mistral, DialoGPT)

Outputs a hallucination probability (0.0–1.0)

Flags overconfident or uncertain language

Scores factual density, coherence, and contradictions

Compares responses to context (if provided)

Fully framework-agnostic — no extra dependencies

Built for production + research workflows
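
For a sense of how this could be wired up, here's a minimal usage sketch. The module, function, and result field names below are illustrative assumptions, not necessarily the exact API in the repo:

    # Sketch only: names are hypothetical; check the repo for the real interface.
    from hallucination_detector import detect_hallucination  # hypothetical import

    response = "The Eiffel Tower was completed in 1889 and stands 330 meters tall."
    context = "Background article about Parisian landmarks."  # optional grounding text

    result = detect_hallucination(response, context=context)

    print(result["probability"])  # float in 0.0-1.0; higher means more likely hallucinated
    print(result["flags"])        # e.g. ["overconfident_language"]
    print(result["scores"])       # per-metric scores: factual density, coherence, contradictions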

Benchmarked on 1,000+ samples:

F1: 0.75

AUC-ROC: 0.81

Fast: ~0.2s per response

Comes with plug-and-play examples:

OpenAI, Anthropic, local models

Flask API

Custom scoring configs
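
As an illustration of the plug-and-play pattern with the OpenAI client (the detector call reuses the hypothetical helper from the sketch above; the 0.7 threshold is an arbitrary choice):

    # Illustrative wiring, not the repo's bundled example.
    from openai import OpenAI
    from hallucination_detector import detect_hallucination  # hypothetical import

    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "When was the Eiffel Tower built?"}],
    )
    answer = completion.choices[0].message.content

    result = detect_hallucination(answer)
    if result["probability"] > 0.7:  # tunable cutoff
        print("Flagged for review:", result["flags"])
    else:
        print(answer)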

I’m giving this away free under MIT. Would love feedback, issues, PRs — or just to know if it helps you build safer LLM apps.

GitHub: https://github.com/Mattbusel/LLM-Hallucination-Detection-Scr...

Comments

Shmungus•8mo ago
Hi HN!

I’m excited to share this lightweight hallucination detector I built to help identify unreliable or “hallucinated” outputs from LLMs like GPT, Claude, and various local models.

It uses multiple methods — from spotting overconfidence and contradictions to scoring factual density and coherence — to give a hallucination probability score for any generated response.

It’s framework-agnostic, fast (~0.2s per response), and designed for both research and production use. Plus, it’s completely free under the MIT license.

I’d love to hear your thoughts, feedback, and if you find it useful for your projects. Happy to answer questions or discuss how it works under the hood!

Thanks for checking it out!

akoboldfrying•8mo ago
I'm impressed that you give precision and recall metrics for this -- and amazed that they are non-terrible. I'm amazed because a fully general hallucination detector is obviously a truth oracle -- it can answer any question about anything in the world, by framing the question as a statement and then asking whether that statement is a hallucination.

From among the analyses the tool makes, it makes sense to me that contradictions can be detected, since that doesn't require knowledge of the real world. I'm very interested in how you do this detection ("Logical inconsistencies") in practice. Likewise for "Logical progression".

Two questions:

1. Since "overconfidence" is treated as a red flag, won't applying your tool as a filter cause LLM response precision to drop, often unnecessarily? The safest answer an LLM can give to "When was the Eiffel Tower built?" is surely along the lines of "The Eiffel Tower may or may not have been built at some time in the past."

2. I don't see how this tool can detect the kind of hallucination that (a) involves no contradiction and (b) requires knowledge of the world. These come up often. Examples: Citing plausible-sounding but nonexistent court cases, calling plausible-sounding but nonexistent methods in an API.

Shmungus•8mo ago
Thanks, really appreciate the thoughtful questions and skepticism (and totally agree: a “perfect” hallucination detector would be a truth oracle).

To your points:

1. Overconfidence and precision: You're right that filtering on overconfidence alone could tank precision. That's why the tool doesn't treat it as a strict red flag; it's one of several signals, and the final hallucination score is a weighted combination of multiple metrics (confidence, density, contradictions, progression, etc.). Overconfident phrasing tends to correlate with hallucinations in aggregate, but the idea is never to penalize all confident answers, just to flag the ones where that confidence is unjustified by the context or content.
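
To make the weighting idea concrete, here's a rough sketch; the signal names and weights are invented for illustration and are not the detector's actual values:

    # Illustrative only: the real signals and weights in the tool may differ.
    def combine_signals(signals: dict) -> float:
        """Blend per-metric scores (each in [0, 1]) into one hallucination probability."""
        weights = {
            "overconfidence": 0.25,
            "factual_density": 0.20,
            "contradictions": 0.35,
            "coherence": 0.20,
        }
        score = sum(weights[name] * signals.get(name, 0.0) for name in weights)
        return min(max(score, 0.0), 1.0)  # clamp to [0, 1]

    combine_signals({"overconfidence": 0.8, "contradictions": 0.1,
                     "factual_density": 0.4, "coherence": 0.2})  # -> roughly 0.355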

2. Detecting hallucinations that require world knowledge: Absolutely, those are the hardest cases. This tool doesn't solve that. Instead, it acts as a proxy evaluator:

Factual density gives a rough measure of “how many claims are being made”

Overconfidence vs. ambiguity highlights where a model might be bluffing

Logical coherence and contradiction checks flag when a model violates internal structure (not ground truth)

But it won't catch the subtle world-knowledge misses (like fake court cases or made-up API calls) unless you pair it with a grounded context or use external validators.

The long-term hope is: use this tool to raise suspicion, not declare judgment. It's a cheap sanity layer, a “weak oracle” that’s fast, pluggable, and good enough to catch the dumb stuff before you escalate to expensive validators or human review.
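
A sketch of that escalation pattern (again using the hypothetical detect_hallucination helper from above; the threshold and the validator hand-off are placeholders):

    # Cheap heuristic check first; escalate only suspicious responses.
    from hallucination_detector import detect_hallucination  # hypothetical import

    def triage(answer: str, context: str = "") -> str:
        result = detect_hallucination(answer, context=context)
        if result["probability"] < 0.5:  # arbitrary threshold
            return "accept"
        # Suspicious: hand off to a grounded validator or human review.
        return "escalate"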