frontpage.

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•2m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
1•baruchel•4m ago•0 comments

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•birdculture•6m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT–>ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•7m ago•1 comments

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
1•raleobob•10m ago•1 comments

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•11m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
1•jingkai_he•14m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

1•swimmingkiim•18m ago•1 comments

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•21m ago•0 comments

Proving Laderman's 3x3 Matrix Multiplication Is Locally Optimal via SMT Solvers

https://zenodo.org/records/18514533
1•DarenWatson•23m ago•0 comments

Fire may have altered human DNA

https://www.popsci.com/science/fire-alter-human-dna/
3•wjb3•24m ago•1 comments

"Compiled" Specs

https://deepclause.substack.com/p/compiled-specs
1•schmuhblaster•29m ago•0 comments

The Next Big Language (2007) by Steve Yegge

https://steve-yegge.blogspot.com/2007/02/next-big-language.html?2026
1•cryptoz•30m ago•0 comments

Open-Weight Models Are Getting Serious: GLM 4.7 vs. MiniMax M2.1

https://blog.kilo.ai/p/open-weight-models-are-getting-serious
4•ms7892•40m ago•0 comments

Using AI for Code Reviews: What Works, What Doesn't, and Why

https://entelligence.ai/blogs/entelligence-ai-in-cli
3•Arindam1729•40m ago•0 comments

Show HN: Solnix – an early-stage experimental programming language

https://www.solnix-lang.org/
2•maheshbhatiya•40m ago•0 comments

DoNotNotify is now Open Source

https://donotnotify.com/opensource.html
5•awaaz•42m ago•2 comments

The British Empire's Brothels

https://www.historytoday.com/archive/feature/british-empires-brothels
2•pepys•42m ago•0 comments

What rare disease AI teaches us about longitudinal health

https://myaether.live/blog/what-rare-disease-ai-teaches-us-about-longitudinal-health
2•takmak007•47m ago•0 comments

The Brand Savior Complex and the New Age of Self Censorship

https://thesocialjuice.substack.com/p/the-brand-savior-complex-and-the
2•jaskaransainiz•49m ago•0 comments

Show HN: A Prompting Framework for Non-Vibe-Coders

https://github.com/No3371/projex
2•3371•50m ago•0 comments

Kilroy is a local-first "software factory" CLI

https://github.com/danshapiro/kilroy
2•ukuina•1h ago•0 comments

Mathscapes – Jan 2026 [pdf]

https://momath.org/wp-content/uploads/2026/02/1.-Mathscapes-January-2026-with-Solution.pdf
1•vismit2000•1h ago•0 comments

80386 Barrel Shifter

https://nand2mario.github.io/posts/2026/80386_barrel_shifter/
2•jamesbowman•1h ago•0 comments

Training Foundation Models Directly on Human Brain Data

https://arxiv.org/abs/2601.12053
1•helloplanets•1h ago•0 comments

Web Speech API on HN Threads

https://toulas.ch/projects/hn-readaloud/
1•etoulas•1h ago•0 comments

ArtisanForge: Learn Laravel through a gamified RPG adventure – 100% free

https://artisanforge.online/
2•grazulex•1h ago•1 comments

Your phone edits all your photos with AI – is it changing your view of reality?

https://www.bbc.com/future/article/20260203-the-ai-that-quietly-edits-all-of-your-photos
1•breve•1h ago•0 comments

DStack, a small Bash tool for managing Docker Compose projects

https://github.com/KyanJeuring/dstack
3•kppjeuring•1h ago•1 comments

Hop – Fast SSH connection manager with TUI dashboard

https://github.com/danmartuszewski/hop
2•danmartuszewski•1h ago•1 comments

Good if make prior after data instead of before

https://www.lesswrong.com/posts/JAA2cLFH7rLGNCeCo/good-if-make-prior-after-data-instead-of-before
13•surprisetalk•1w ago

Comments

cracki•6d ago
Maybe if I had read more of that site or that author, or if I weren't so close to falling asleep, this could have made sense to me. It didn't.

The title certainly made me wonder if I was having a stroke. I'm now sure I wasn't.

Feel free to turn my statements into a bunch of probabilities.

lsaferite•6d ago
Good to know I'm not alone. I'm also tired, so I guess that could be it.
gavmor•6d ago
"Prior" is an adjective.
dataflow•6d ago
In math it's a well-known shorthand for "prior distribution", a noun. https://en.wikipedia.org/wiki/Prior_probability
4sak3n•6d ago
Adjectives can be used as nouns in informal speech
two_handfuls•6d ago
Terrible title but good article.
overtone1000•6d ago
I know that this is a great discussion on Bayesian reasoning, but, honestly, I'm probably just going to use it to rebuff my friends who occasionally bring up aliens.
Mae_soph•6d ago
This article reveals a fundamental misunderstanding of Bayesian statistics by the author when they say "for the sake of simplicity, let's call it a wash and assume the odds are the same", because the odds ratio is the key quantity in Bayesian statistics.

You cannot just go "this chance is very small and so is this chance, therefore we can assume them to be similar". That's just wrong. The chance of seeing the data we see if there are aliens is a lot smaller than the chance of the data given that there are none. Yes, both are very small, but that does not mean the odds ratio can be assumed to be 1. As the author illustrates, this incorrect reasoning breaks the usefulness of Bayesian statistics.

As an example, let's say that you claim to be using magic to win the lottery, which I don't believe. Now the lottery happens and the winning number is 4529640, which is not yours. The probability of that particular number winning is small regardless of these initial hypotheses. If we follow the reasoning in the article, we would say that because both chances are small, the result gives us no information about the hypotheses, which is clearly wrong.
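A minimal sketch of the arithmetic behind that point, using made-up numbers (a 1-in-10,000,000 lottery and a hypothetical prior on "magic"; none of these figures come from the article or the comment):

    # Hypothetical numbers: a 1-in-10,000,000 lottery; "magic" is assumed
    # to make your own number win, so a different number winning is nearly
    # impossible under that hypothesis.
    N = 10_000_000                       # possible lottery numbers
    p_data_given_no_magic = 1 / N        # any specific number wins with prob 1/N
    p_data_given_magic = 1e-12           # magic should have produced *your* number

    # Both likelihoods are tiny, but their ratio (the Bayes factor) is huge:
    bayes_factor = p_data_given_no_magic / p_data_given_magic
    print(bayes_factor)                  # ~100,000 : 1 in favour of "no magic"

    # Posterior odds for magic = prior odds * P(data|magic)/P(data|no magic),
    # i.e. prior odds divided by the factor above, so even a generous prior
    # on magic gets crushed; the two small numbers are not "a wash".
    prior_odds_magic = 1 / 100           # hypothetical prior odds for magic
    posterior_odds_magic = prior_odds_magic / bayes_factor
    print(posterior_odds_magic)          # ~1e-7

Both likelihoods are tiny, yet their ratio is what moves the posterior, which is the point being made about treating the two small numbers as equal.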