Reverse Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
1•pacod•3m ago•0 comments

The AI4Agile Practitioners Report 2026

https://age-of-product.com/ai4agile-practitioners-report-2026/
1•swolpers•4m ago•0 comments

Digital Independence Day

https://di.day/
1•pabs3•7m ago•0 comments

What a bot hacking attempt looks like: SQL injections galore

https://old.reddit.com/r/vibecoding/comments/1qz3a7y/what_a_bot_hacking_attempt_looks_like_i_set_up/
1•cryptoz•9m ago•0 comments

Show HN: FlashMesh – An encrypted file mesh across Google Drive and Dropbox

https://flashmesh.netlify.app
1•Elevanix•10m ago•0 comments

Show HN: AgentLens – Open-source observability and audit trail for AI agents

https://github.com/amitpaz1/agentlens
1•amit_paz•10m ago•0 comments

Show HN: ShipClaw – Deploy OpenClaw to the Cloud in One Click

https://shipclaw.app
1•sunpy•13m ago•0 comments

Unlock the Power of Real-Time Google Trends – Visit: www.daily-trending.org

https://daily-trending.org
1•azamsayeedit•15m ago•1 comment

Explanation of British Class System

https://www.youtube.com/watch?v=Ob1zWfnXI70
1•lifeisstillgood•16m ago•0 comments

Show HN: Jwtpeek – minimal, user-friendly JWT inspector in Go

https://github.com/alesr/jwtpeek
1•alesrdev•19m ago•0 comments

Willow – Protocols for an uncertain future [video]

https://fosdem.org/2026/schedule/event/CVGZAV-willow/
1•todsacerdoti•20m ago•0 comments

Feedback on a client-side, privacy-first PDF editor I built

https://pdffreeeditor.com/
1•Maaz-Sohail•24m ago•0 comments

Clay Christensen's Milkshake Marketing (2011)

https://www.library.hbs.edu/working-knowledge/clay-christensens-milkshake-marketing
2•vismit2000•31m ago•0 comments

Show HN: WeaveMind – AI Workflows with human-in-the-loop

https://weavemind.ai
7•quentin101010•36m ago•1 comment

Show HN: Seedream 5.0: free AI image generator that claims strong text rendering

https://seedream5ai.org
1•dallen97•38m ago•0 comments

A contributor trust management system based on explicit vouches

https://github.com/mitchellh/vouch
2•admp•40m ago•1 comment

Show HN: Analyzing 9 years of HN side projects that reached $500/month

2•haileyzhou•40m ago•0 comments

The Floating Dock for Developers

https://snap-dock.co
2•OsamaJaber•41m ago•0 comments

Arcan Explained – A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
2•walterbell•43m ago•0 comments

We are not scared of AI, we are scared of irrelevance

https://adlrocha.substack.com/p/adlrocha-we-are-not-scared-of-ai
1•adlrocha•44m ago•0 comments

Quartz Crystals

https://www.pa3fwm.nl/technotes/tn13a.html
2•gtsnexp•46m ago•0 comments

Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
2•suvankar_m•48m ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
2•xipz•50m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•54m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
2•baruchel•56m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•57m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT->ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•58m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
2•raleobob•1h ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•1h ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
2•jingkai_he•1h ago•0 comments

The Nature of Hallucinations

https://blog.qaware.de/posts/nature-of-hallucinations/
15•baquero•4mo ago

Comments

baquero•4mo ago
Why do language models sometimes just make things up? We’ve all experienced it: you ask a question and get a confident-sounding answer that turns out to be wrong. Even when you know the answer is false, the model insists on it. To this day, the problem can be reduced, but not eliminated.
partomniscient•4mo ago
Title should be amended to "Nature of AI Hallucinations".

The first line "Why do language models sometimes just make things up?" was not what I was expecting to read about.

add-sub-mul-div•4mo ago
It's probably futile by now to fight the fact that "hallucination" and "slop" have become synonyms for AI output, and that the AI context will be their most common or default use going forward.

That holds regardless of whether those terms, in the AI context, correspond exactly to their original meanings.

Uehreka•4mo ago
I remember super clearly the first time an LLM told me “No.” It was in May when I was using Copilot in VS Code and switched from Claude 3.7 Sonnet to Claude Sonnet 4. I asked Sonnet 4 to do something 3.7 Sonnet had been struggling with (something involving the FasterLivePortrait project in Python) and it told me that what I was asking for was not possible and explained why.

I get that this is different from getting an LLM to admit that it doesn’t know something, but I thought “getting a coding agent to stop spinning its wheels when set to an impossible task” was months or years away, and then suddenly it was here.

I haven’t yet read a good explanation of why Claude 4 is so much better at this kind of thing, and it goes against what most people say about how LLMs are supposed to work (which is a large part of why I’ve been telling people to stop leaning on mechanical explanations of LLM behavior, strengths, and weaknesses). However, it was definitely a step-function improvement.

cainxinth•4mo ago
Yet LLMs also sometimes erroneously claim they cannot do something they can.
s-macke•4mo ago
Just as they learn facts by heart, they also learn by heart what they can’t do.

Ask them to solve one of the Millennium Prize Problems. They’ll say they can’t do it, but that 'No' is just memorized. There’s nothing behind it.

Panzerschrek•4mo ago
I find the term "hallucination" very misleading. What LLMs produce is really "lies" or "misinformation". The term "hallucination" is so common nowadays only because the corporations developing LLMs prefer it to telling the truth: that their models are just huge machines for making things up. I am still wondering why there are no legal consequences for the authors of these LLMs because of that.
leobg•4mo ago
“Confabulation” is the better term imho (literally: making things up). But I guess OpenAI et al stuck to “hallucination” because it generalizes across text, audio and image generation.
s-macke•4mo ago
Author here. The discussion about this wording is actually the opening section of the article.

> Unfortunately, the term hallucination quickly stuck to this phenomenon — before any psychologist could object.

vrighter•4mo ago
There's no such thing as "LLM hallucinations". For there to be, there would have to be an objective, rigorous way to distinguish them from non-hallucinations, and no such way exists. They walk like the "good" output, they quack like the "good" output, and they are indistinguishable from the "good" output.

The only difference between the two is whether a human likes it. If the human doesn't like it, then it's a hallucination. If the human doesn't know it's wrong, then it's not a hallucination (as far as that user is concerned).

The term "hallucination" is just marketing BS. In any other case it'd be called "broken shit".

The term hallucination is used as if the network were somehow giving the wrong output. It's not. It's giving a probability distribution for the next token, which is exactly what it was designed to do. The misunderstanding is in what the user thinks they are asking. They think they are asking for a correct answer, but they are actually asking for a plausible answer. Very different things. An LLM is designed to give plausible, not correct, answers. And when a user asks for a plausible, but not necessarily correct, answer (whether or not they realize it) and gets a plausible but not necessarily correct answer, the LLM is working exactly as intended.
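
A minimal sketch of that point, with a toy vocabulary and hand-picked probabilities rather than a real model (the prompt, tokens, and numbers here are all invented for illustration): sampling from a next-token distribution produces a plausible continuation, and nothing in the loop checks whether it is true.

    import random

    # Toy next-token distribution for the prompt "The capital of France is".
    # Probabilities are invented for illustration, not taken from any model.
    next_token_probs = {
        "Paris": 0.55,      # plausible and happens to be correct
        "Lyon": 0.30,       # plausible but wrong
        "Marseille": 0.15,  # plausible but wrong
    }

    def sample_next_token(probs):
        # Sample one token in proportion to its probability.
        # Nothing here checks truth; the model only ranks plausibility.
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print("The capital of France is", sample_next_token(next_token_probs))

By construction the sampler returns "Lyon" some of the time; it has no notion of a wrong answer, only a less probable one.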

s-macke•4mo ago
Author here. You’ve just summarized the main part of the article. To keep things simple, the focus is on plain factual claims. But yes, the consequences of next-token prediction go well beyond wrong facts.