frontpage.

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
1•Willingham•2m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•3m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•8m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
1•mooreds•9m ago•1 comment

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•10m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•11m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•16m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•18m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•18m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•18m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•20m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•21m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•21m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•22m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•27m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•28m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•29m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•30m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•31m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•32m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•34m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•34m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•35m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•35m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•36m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•38m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•39m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•40m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•40m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
2•mooreds•41m ago•0 comments

The problem with LLMs isn't hallucination, it's context-specific confidence

https://www.signalfire.com/blog/llm-hallucinations-arent-bugs
4•kerwioru9238492•3mo ago

Comments

zviugfd•3mo ago
It feels like most safety work is turning LLMs into overly cautious assistants, and I like how this points out that we could be trading away imagination for a false sense of reliability.
alganet•3mo ago
> Humans don’t get rewarded for saying “I don’t know” to every question, because that’s not useful.

Humans get rewarded for thinking "I don't know", a lot. That's why it's hard to compare.

> A model that always bluffs

A model doesn't bluff. It feels to us humans like it bluffs, but there are no bluff mechanics in play. The model doesn't assess the prompter's ability to call its bluff. It isn't hiding that it doesn't know something. It has simply not reached a point in a sequence of token predictions that may or may not contain something that resembles a call to something that resembles a bluff.
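
Concretely, the whole "bluff" is just a next-token scoring loop. A toy sketch (assuming GPT-2 via the Hugging Face transformers library; the model and prompt are only illustrative): nothing in it represents the person on the other side of the conversation.

    # Toy sketch: a causal LM only scores candidate next tokens.
    # Nothing here models a listener, so there is nothing that
    # could "decide" to bluff -- it is prediction all the way down.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    ids = tokenizer("The capital of Australia is", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    top_p, top_i = torch.topk(probs, 5)     # five most likely continuations
    for p, i in zip(top_p, top_i):
        print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")

Whether the top token turns out to be " Canberra" or " Sydney" is decided by these scores alone; the model never asks whether anyone will check.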

Up to the point it's corrected, the model's representation of what was asked is the best it can do. It has no means to judge itself. Which leads to...

> The real issue isn’t that models make things up; it’s that they don’t clearly signal how confident they are when they do.

Which sounds like exactly what I said, but it's not. Signaling confidence is just a more convincing faux-bluff. Signaling is a side effect of bluffing, a symptom, not the real thing (which is more about assessing whoever is on the other side of the conversation).

> Imagining things, seeing problems from the wrong angle, and even fabricating explanations are the seeds of creativity.

I agree with this. However, Newton was not bluffing: he was right, confident about it, and right to be confident about it. It just turns out that his description was at a lower resolution of knowledge than Einstein's.

For this to work, we need lots of "connective tissue" ideas. Roads we can explore freely without being called liars. Things we can say without asserting that they are true or false, without needing to be confident or right, without being assessed directly. This is outside the realm of bluffing or of saying useful things. It's quite the opposite.

When people saw comets and described them as dragons in the sky, they were not hallucinating or telling lies; they were preserving some connective-tissue idea as best they could, outside the realm of being right or wrong. These were not bluffs. There were some "truths" in their mistakes, or something useful (they were inadvertently recording astronomical data before astronomy existed). Those humans felt it was important, and those stories stuck. Can we say the same thing about LLM hallucinations? I don't think we're ready to answer that.

So, yes. Hallucinations could be a feature, but there's a lot missing here.

kerwioru9238492•3mo ago
One issue right now is that many ML benchmarks reward models for guessing on multiple-choice questions, since a blind guess still has some probability of being right. On top of that, models have been tuned via RLHF to sound very confident, because people think confident responses sound good. The two paired together resemble bluffing: models guess at answers very confidently rather than saying "I don't know".
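
A quick expected-value sketch of that incentive (the 4-option benchmark and scoring numbers are hypothetical, not from any specific eval):

    # Binary 0/1 grading: 1 point if correct, 0 otherwise.
    p_guess = 0.25                    # uniform blind guess over 4 options
    ev_guess = 1 * p_guess            # = 0.25
    ev_abstain = 0.0                  # "I don't know" is graded as wrong
    # Guessing strictly beats abstaining, so never say "I don't know".

    # With a wrong-answer penalty calibrated so a blind guess nets zero
    # (an SAT-style rule), the incentive to bluff disappears:
    ev_guess_penalized = 1 * p_guess + (-1 / 3) * (1 - p_guess)   # = 0.0

Under the first rule, a model that always answers strictly dominates one that sometimes abstains; the penalized rule removes that edge.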
_wire_•3mo ago
"The problem with Magic 8-ball is lack of context specific confidence in its answers"

This article and its attendant comments reveal that the AI sector is turning to co-dependent excuse-making for a technology that clearly can't live up to its hype.

Get ready for the phrenology of AI...

"I am going to need to visit your data center to lay hands on the subject."