frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
72•valyala•3h ago•15 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•11 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
30•zdw•3d ago•2 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
121•valyala•3h ago•92 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
82•mellosouls•6h ago•156 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
40•surprisetalk•3h ago•49 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
142•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
92•vinhnx•6h ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
849•klaussilveira•23h ago•255 comments

First Proof

https://arxiv.org/abs/2602.05192
63•samasblack•6h ago•51 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1088•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
60•thelok•5h ago•9 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
91•onurkanbkrc•8h ago•5 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
228•jesperordrup•13h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
512•theblazehen•3d ago•190 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
319•ColinWright•3h ago•380 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
12•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
249•alainrk•8h ago•403 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
607•nar001•7h ago•267 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
25•momciloo•3h ago•4 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
34•marklit•5d ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
177•1vuio0pswjnm7•10h ago•247 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
46•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
123•videotopia•4d ago•37 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
91•speckx•4d ago•104 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
28•sandGorgon•2d ago•14 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
208•limoce•4d ago•115 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
283•isitcontent•23h ago•38 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
564•todsacerdoti•1d ago•275 comments

Fighting Brandolini's Law with Sampling

https://brady.fyi/fact-checking/
30•h-bradio•6mo ago

Comments

ygritte•6mo ago
Donald Trump was actually not the top liar at the time of sampling, only second place. Color me surprised.
prasadjoglekar•6mo ago
Well, "fact checkers" like Politifact are precisely what are considered biased themselves. Sampling from a biased dataset still shows the same bias.

https://dukespace.lib.duke.edu/items/8f9a6f3b-efd7-46f3-b4be...

You may be aligned with the alleged or real partisanship of PolitiFact, so to you there's no problem here. But team Harris and Buttigieg lost the election.

Hence these consequences (from Wikipedia):

In January 2025, Mark Zuckerberg announced an end to Meta's eight-year partnership with PolitiFact, claiming that "fact checkers have just been too politically biased."[62][63]

noelwelsh•6mo ago
This is a great example of the issue the blog post is addressing, namely:

> The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

The play book is:

1. Set an impossible standard (an undefined "unbiased" fact checker)

2. When impossible standard cannot be reached, throw toys out of the pram

Meanwhile, egregious levels of bullshit now go unchallenged.

brookst•6mo ago
Yeah it’s just the seat belt fallacy: seatbelts are useless because people still die in car crashes.

Somehow our whole society has fallen for the “unless you can point to a perfect saint who has never done any wrong, we might as well be led by active criminals” pitch. It’s so nihilistic.

littlestymaar•6mo ago
> In January 2025, Mark Zuckerberg announced an end to Meta's eight-year partnership with PolitiFact, claiming that "fact checkers have just been too politically biased."[62][63]

No relationship with the fact that Trump became president again in Jan 2025 with Zuckerberg giving money to his inauguration, obviously.

ImPostingOnHN•6mo ago
Almost everything is "considered biased" by some people. In this case, Zuckerberg and the Bain employee who authored that report are indeed people -- two out of billions.

Consider an alternative framing: "fact checkers like PolitiFact are precisely the ones considered UNbiased". It is at least as true (because at least two people consider it to be so).

Given that alternative framing to yours: what, if anything, should we do about the situation?

How do you think framing, rather than substance, affects that discussion?

alanbernstein•6mo ago
The "falsiness distribution" by itself is not capable of answering this kind of question. Imagine a politician who speaks just one statement, a "pants on fire" lie. They immediately reach the top liar spot.

The distribution also leaves out the significance and the reach of the statements.

Your statement is about as meaningful as the "fastest growing <whatever>" trick. E.g. growing from 0->1 user is infinite growth, so wins fastest growing immediately.

superxpro12•6mo ago
If this were ESPN or similar, they would say "min 50 games" or something to sort out the outliers (heh).
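
Roughly, with made-up numbers rather than anything from the post or PolitiFact, that cutoff looks like this:

  # Made-up speakers and counts, purely illustrative -- not PolitiFact data.
  speakers = {
      "A": {"checked": 1,   "false": 1},    # one "pants on fire" statement
      "B": {"checked": 400, "false": 120},
      "C": {"checked": 250, "false": 40},
  }

  def falsehood_rate(s):
      return s["false"] / s["checked"]

  # Naive ranking: the one-statement speaker "wins" with a 100% rate.
  naive = sorted(speakers, key=lambda k: falsehood_rate(speakers[k]), reverse=True)

  # "Min 50 games" style cutoff: only rank speakers with enough checked statements.
  MIN_CHECKED = 50
  qualified = sorted(
      (k for k in speakers if speakers[k]["checked"] >= MIN_CHECKED),
      key=lambda k: falsehood_rate(speakers[k]),
      reverse=True,
  )

  print(naive)      # ['A', 'B', 'C'] -- the single-statement speaker tops the naive list
  print(qualified)  # ['B', 'C']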
h-bradio•6mo ago
OP here -- thanks for your reply! You're exactly right! I included the NYT/PolitiFact graph at the top as an example of that problem. In the second half of the post, I propose what I think could work a little better (sampling comparable speeches and fact-checking the entire text).
MarkusQ•6mo ago
Both this and the underlying system of fact checking are ignoring the elephant in the room: we have no direct access to the truth. Instead, all we can do is check for consistency. This can be either internal (if I say "two is even" and later "two is odd" I must have lied at least once) or external (e.g. look it up somewhere, or ask an expert).
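
The internal case, at its crudest, is just checking whether any proposition gets asserted both ways. A toy sketch (the claims are placeholders, not anyone's real statements):

  # Toy internal-consistency check: flag any proposition asserted both ways.
  claims = [
      ("two is even", True),
      ("the deficit fell last year", True),
      ("two is even", False),   # later statement: "two is odd"
  ]

  seen = {}
  for prop, value in claims:
      if prop in seen and seen[prop] != value:
          print(f"Contradiction: '{prop}' asserted as both {seen[prop]} and {value}")
      seen[prop] = value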

The best external source is reality, if you can corner it with a well designed experiment; this is, unfortunately, really, really hard.

Established theories are also good (but, as history has shown, can be wrong). The biggest problem with theory-based fact checking is that our best theories generally come in pairs that make conflicting claims or are otherwise inconsistent. Plus, the proper application of theories can often be a minefield of subtlety. So this comes down to a choice of "pick the theory that gives the answer you like" or "trust the experts" (e.g. argument by authority).

That leaves us with the most popular option: compare the claim against some consensus (and hope the consensus happens to be correct). This is generally easy, and works great when there _is_ a consensus, which leads us to overestimate its reliability. And thus we waste years exploring amyloid beta plaques, looking for dark matter, teaching whole-word reading, and so on.

It would be great if we had an easy way to tell who's lying, but in fact what we've got is a lot of ways to tell who we agree with and who we don't, and we don't always agree with each other on that.

h-bradio•6mo ago
OP here! Thanks for calling out this important point. As I fact-checked each claim, I was surprised at how many of the checks were "does the paper he's citing say what he says it does?" You can see them here: https://fact-check.brady.fyi/documents/3f744445-0703-4baf-89...
MarkusQ•6mo ago
Yeah. And that's really important: if someone makes a correct claim by accident, say they misread a paper that incorrectly claims X as instead claiming the (correct) not-X, we shouldn't consider it evidence that they are trustworthy or honest, just lucky.

But then you have cases where someone correctly cites a source that they know (or at least plausibly should know) to be incorrect. This is commonly done when flawed studies are funded specifically so they can be cited. This is arguably even more egregious lying, yet would pass a consistency-based "fact check".

Likewise, the factual claim ("eight out of ten doctors surveyed recommend smoking brand-x") can be true while the implication is false.

In short, I'm not claiming such checks can't catch liars (they can), just that passing such checks doesn't mean they were telling the truth or what they said or implied was correct.

poulpy123•6mo ago
Thinking you can objectively quantify the degree to which a politician is lying is a mistake. Obvious, open, fact-checkable, and relevant lies are the minority.
h-bradio•6mo ago
OP here! Going into it, I definitely agreed and thought that easily fact-checkable claims would be the minority. But as I worked, I found that many of his claims were "this paper says this", so verifying the claim was as simple as checking "does the paper he's citing say what he says it does?" You can see them here: https://fact-check.brady.fyi/documents/3f744445-0703-4baf-89...