
Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•41s ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
1•downboots•48s ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
1•whack•1m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•1m ago•0 comments

We didn't ask for this internet – Ezra Klein Show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•2m ago•0 comments

The AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
1•geox•4m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•5m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
1•jerpint•5m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•7m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
1•breadwithjam•10m ago•1 comment

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•10m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•12m ago•1 comment

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•13m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•13m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•13m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•14m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•15m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•16m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•17m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•21m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•21m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•23m ago•1 comment

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•23m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•27m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•30m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•31m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•31m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•32m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•33m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•35m ago•0 comments

All Your Coworkers Are Probabilistic Too

https://scatterarrow.com/content/en/all-your-coworkers-are-probabilistic.html
5•exagolo•2mo ago

Comments

exagolo•2mo ago
When people complain about large language models, I often feel like they're complaining about their coworkers without realizing it...
whobre•2mo ago
At least my coworkers usually don’t hallucinate.
illuminator83•2mo ago
Are you sure? I've been confidently wrong about stuff before. Embarrassing, but it happens. And I've worked with many people who are sometimes wrong about stuff too. With LLMs we call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or plain "a mistake".
fainpul•2mo ago
True, but people can use qualifiers like "I think …" or "Wasn't there this thing …", which let you judge how certain they are about the answer.

LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.

illuminator83•2mo ago
Yeah, for the most part. But I've had a few instances in which someone was very sure about something and still wrong. Usually not about APIs, but about stuff that is more work to verify or not quite as timeless: cache optimization issues, or even the suitability of certain algorithms for some problems. The world is changing a lot, and sometimes people don't notice and stick to what was state-of-the-art a decade ago.

But I think the point of the article is that you should have measures in place that make hallucinations not matter, because they will be noticed in CI and tests.
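
Concretely, that can be as simple as an ordinary unit test over the generated code. A minimal sketch in Python, run under pytest; the myproject.text_utils module and its slugify helper are made-up stand-ins, not anything from the article:

    # A CI safety net for LLM-written code: import it and exercise it,
    # so a hallucinated name fails the build as a plain test failure.
    # NOTE: myproject.text_utils and slugify are hypothetical examples.

    def test_slugify_exists_and_works():
        # Importing inside the test turns an invented function name into
        # an ImportError that shows up as a normal failing test.
        from myproject.text_utils import slugify  # hypothetical module
        assert slugify("Hello, World!") == "hello-world"

    def test_slugify_edge_cases():
        from myproject.text_utils import slugify
        assert slugify("") == ""

The test doesn't care whether a human or a model wrote slugify; it just refuses to let an invented API or a wrong result through.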

whobre•2mo ago
It’s different. People don’t just invent random APIs that don’t exist. LLMs do that all the time.
illuminator83•2mo ago
For the most part, yes. Because people usually read the docs and test things on their own.

But I remember a few people long ago confidently telling me how to do this or that in e.g. "git", only to find out during testing that it didn't quite work like that. Or telling me how some subsystem could be tested, when it didn't work like that at all. Because they operated from memory instead of checking, or confused one tool/system with another.

LLMs can and should verify their assumptions too; the blog article is about exactly that. It should keep most hallucinations, and most of the mistakes people make, from doing any real harm.

If you let an LLM do that, it won't be much of a problem either. I usually link an LLM to an online source for the API I want to use, or tell it to just look it up, so it is less likely to make such mistakes. It helps.
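
Even a crude mechanical check helps here. A rough sketch of the idea; json.dump_pretty is deliberately invented to show what a flagged call looks like:

    # Before trusting a generated call, check that the module really
    # exposes that attribute. json.dump_pretty below is made up on purpose.
    import importlib

    def api_exists(module_name, attr):
        """True if module_name imports cleanly and exposes attr."""
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            return False
        return hasattr(module, attr)

    # Calls a model might emit: one real, one hallucinated.
    for module_name, attr in [("json", "dumps"), ("json", "dump_pretty")]:
        ok = api_exists(module_name, attr)
        print(f"{module_name}.{attr}: {'ok' if ok else 'not found'}")

It only catches the most blatant inventions, of course; the real check is still tests and the actual docs.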

whobre•2mo ago
Again, with people it is a rare occurrence; LLMs do it regularly. I just can’t believe anything they say.
exagolo•2mo ago
I do agree. I still think the article articulates a very interesting thought: the better the input for a problem, the better the output. That applies to LLMs, but also to colleagues.