frontpage.

Was going to share my work

1•hiddenarchitect•2m ago•0 comments

Pitchfork: A devilishly good process manager for developers

https://pitchfork.jdx.dev/
1•ahamez•2m ago•0 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
1•mltvc•6m ago•0 comments

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•7m ago•1 comment

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•7m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
1•SchwKatze•7m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•8m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
1•guerrilla•9m ago•0 comments

Y Combinator Founder Organizes 'March for Billionaires'

https://mlq.ai/news/ai-startup-founder-organizes-march-for-billionaires-protest-against-californi...
1•hidden80•10m ago•1 comment

Ask HN: Need feedback on the idea I'm working on

1•Yogender78•10m ago•0 comments

OpenClaw Addresses Security Risks

https://thebiggish.com/news/openclaw-s-security-flaws-expose-enterprise-risk-22-of-deployments-un...
1•vedantnair•11m ago•0 comments

Apple finalizes Gemini / Siri deal

https://www.engadget.com/ai/apple-reportedly-plans-to-reveal-its-gemini-powered-siri-in-february-...
1•vedantnair•11m ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
3•vedantnair•12m ago•0 comments

Emacs-tramp-RPC: high-performance TRAMP back end using MsgPack-RPC

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•fanf2•13m ago•0 comments

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•17m ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•20m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

2•amichail•20m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•27m ago•2 comments

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•29m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•29m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•30m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•31m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•32m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•32m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•33m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•35m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
5•codexon•35m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•36m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•40m ago•0 comments

Show HN: A glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•41m ago•0 comments

All Your Coworkers Are Probabilistic Too

https://scatterarrow.com/content/en/all-your-coworkers-are-probabilistic.html
5•exagolo•2mo ago

Comments

exagolo•2mo ago
When people complain about large language models, I often feel like they're complaining about their coworkers without realizing it...
whobre•2mo ago
At least my coworkers usually don’t hallucinate.
illuminator83•2mo ago
Are you sure? I've been confidently wrong about stuff before. Embarrassing, but it happens. And I've worked with many people who are sometimes wrong about stuff too. With LLMs you call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or plain "a mistake".
fainpul•2mo ago
True, but people can use qualifier words like "I think …" or "Wasn't there this thing …", which let you judge how certain they are about the answer.

LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.

illuminator83•2mo ago
Yeah, for the most part. But I've had a few instances in which someone was very sure about something and still wrong. Usually not about APIs, but about stuff that is more work to verify or not quite as timeless: cache optimization issues, say, or the suitability of certain algorithms for a given problem. The world changes a lot, and sometimes people don't notice and stick with what was state-of-the-art a decade ago.

But I think the point of the article is that you should have measures in place which make hallucinations not matter, because they will be noticed in CI and tests.
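
Something like this, as a minimal pytest-style sketch (slugify stands in for any LLM-written helper; the names are hypothetical):

    def slugify(title: str) -> str:
        # Imagine an LLM generated this body.
        return title.lower().strip().replace(" ", "-")

    def test_slugify_basic():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_catches_hallucinated_api():
        # Had the model instead emitted something like title.to_slug(),
        # a method that doesn't exist on str, this test would blow up
        # with an AttributeError in CI rather than in production.
        assert slugify("  Mixed Case  ") == "mixed-case"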

whobre•2mo ago
It’s different. People don’t just invent random APIs that don’t exist. LLMs do that all the time.
illuminator83•2mo ago
For the most part, yes, because people usually read the docs and test things on their own.

But I remember a few people long ago confidently telling me how to do this or that in, e.g., git, only to find out during testing that it didn't quite work like that. Or telling me how some subsystem could be tested, when it didn't work like that at all. They operated from memory instead of checking, or confused one tool or system for another.

LLMs can and should verify their assumptions too. The blog article is about that. That should keep most hallucinations and mistakes people make from doing any real harm.

If you let an LLM do that, it won't be much of a problem either. I usually link the LLM to an online source for the API I want to use, or tell it to just look things up, so it is less likely to make such mistakes. It helps.
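
Roughly this pattern, as a sketch (call_llm is a placeholder for whatever LLM client you actually use):

    import requests

    def ask_with_docs(question: str, docs_url: str) -> str:
        # Put the current, real documentation in front of the model so it
        # answers from the page instead of guessing from training data.
        docs = requests.get(docs_url, timeout=10).text
        prompt = (
            "Answer using only the documentation below.\n\n"
            f"Question: {question}\n\nDocs:\n{docs}"
        )
        return call_llm(prompt)  # placeholder for your LLM client call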

whobre•2mo ago
Again, with people it’s a rare occurrence; LLMs do it regularly. I just can’t believe anything they say.
exagolo•2mo ago
I do agree. I still think the article articulates a very interesting thought: the better the input for a problem, the better the output. That applies to LLMs, but also to colleagues.