frontpage.

Asynchrony is not concurrency

https://kristoff.it/blog/asynchrony-is-not-concurrency/
148•kristoff_it•4h ago•104 comments

Ccusage: A CLI tool for analyzing Claude Code usage from local JSONL files

https://github.com/ryoppippi/ccusage
14•kristianp•39m ago•3 comments

How to write Rust in the Linux kernel: part 3

https://lwn.net/SubscriberLink/1026694/3413f4b43c862629/
15•chmaynard•1h ago•0 comments

Silence Is a Commons by Ivan Illich (1983)

http://www.davidtinapple.com/illich/1983_silence_commons.html
56•entaloneralie•2h ago•8 comments

Shutting Down Clear Linux OS

https://community.clearlinux.org/t/all-good-things-come-to-an-end-shutting-down-clear-linux-os/10716
7•todsacerdoti•17m ago•2 comments

Broadcom to discontinue free Bitnami Helm charts

https://github.com/bitnami/charts/issues/35164
78•mmoogle•4h ago•42 comments

Wii U SDBoot1 Exploit “paid the beak”

https://consolebytes.com/wii-u-sdboot1-exploit-paid-the-beak/
59•sjuut•3h ago•4 comments

Multiplatform Matrix Multiplication Kernels

https://burn.dev/blog/sota-multiplatform-matmul/
44•homarp•4h ago•16 comments

lsr: ls with io_uring

https://rockorager.dev/log/lsr-ls-but-with-io-uring/
289•mpweiher•11h ago•150 comments

EPA says it will eliminate its scientific research arm

https://www.nytimes.com/2025/07/18/climate/epa-firings-scientific-research.html
48•anigbrowl•1h ago•12 comments

Valve confirms credit card companies pressured it to delist certain adult games

https://www.pcgamer.com/software/platforms/valve-confirms-credit-card-companies-pressured-it-to-delist-certain-adult-games-from-steam/
137•freedomben•8h ago•129 comments

Meta says it won't sign Europe AI agreement, calling it growth-stunting overreach

https://www.cnbc.com/2025/07/18/meta-europe-ai-code.html
81•rntn•6h ago•113 comments

Trying Guix: A Nixer's impressions

https://tazj.in/blog/trying-guix
130•todsacerdoti•3d ago•37 comments

Replication of Quantum Factorisation Records with a VIC-20, an Abacus, and a Dog

https://eprint.iacr.org/2025/1237
57•teddyh•4h ago•13 comments

AI capex is so big that it's affecting economic statistics

https://paulkedrosky.com/honey-ai-capex-ate-the-economy/
179•throw0101c•4h ago•196 comments

Show HN: Molab, a cloud-hosted Marimo notebook workspace

https://molab.marimo.io/notebooks
61•akshayka•5h ago•8 comments

Mango Health (YC W24) Is Hiring

https://www.ycombinator.com/companies/mango-health/jobs/3bjIHus-founding-engineer
1•zachgitt•5h ago

Sage: An atomic bomb kicked off the biggest computing project in history

https://www.ibm.com/history/sage
10•rawgabbit•3d ago•0 comments

The year of peak might and magic

https://www.filfre.net/2025/07/the-year-of-peak-might-and-magic/
68•cybersoyuz•6h ago•34 comments

CP/M creator Gary Kildall's memoirs released as free download

https://spectrum.ieee.org/cpm-creator-gary-kildalls-memoirs-released-as-free-download
226•rbanffy•13h ago•118 comments

Show HN: I built a library management app for those who outgrew spreadsheets

https://www.librari.io/
41•hmkoyan•4h ago•27 comments

Cancer DNA is detectable in blood years before diagnosis

https://www.sciencenews.org/article/cancer-tumor-dna-blood-test-screening
150•bookofjoe•5h ago•92 comments

A New Geometry for Einstein's Theory of Relativity

https://www.quantamagazine.org/a-new-geometry-for-einsteins-theory-of-relativity-20250716/
69•jandrewrogers•8h ago•1 comment

How I keep up with AI progress

https://blog.nilenso.com/blog/2025/06/23/how-i-keep-up-with-ai-progress/
162•itzlambda•5h ago•84 comments

Benben: An audio player for the terminal, written in Common Lisp

https://chiselapp.com/user/MistressRemilia/repository/benben/home
44•trocado•3d ago•3 comments

Show HN: Simulating autonomous drone formations

https://github.com/sushrut141/ketu
11•wanderinglight•3d ago•2 comments

Hundred Rabbits – Low-tech living while sailing the world

https://100r.co/site/home.html
213•0xCaponte•4d ago•60 comments

Making a StringBuffer in C, and questioning my sanity

https://briandouglas.ie/string-buffer-c/
23•coneonthefloor•3d ago•12 comments

How to Get Foreign Keys Horribly Wrong

https://hakibenita.com/django-foreign-keys
49•Bogdanp•3d ago•23 comments

Third patient dies from acute liver failure caused by a Sarepta gene therapy

https://www.biocentury.com/article/656520/third-death-from-a-sarepta-gene-therapy
150•randycupertino•5h ago•60 comments

A Tiny Boltzmann Machine

https://eoinmurray.info/boltzmann-machine
262•anomancer•2mo ago

Comments

vanderZwan•2mo ago
Lovely explanation!

Just FYI: mouse-scrolling is much too sensitive for some reason (I'm assuming it swipes just fine in mobile contexts, have not checked that). The result is that it jumped from first to last "page" and back whenever I tried scrolling. Luckily keyboard input worked so I could still read the whole thing.

djulo•2mo ago
that's soooo coool
nonrandomstring•2mo ago
This takes me back. 1990, building Boltzmann machines and Perceptrons from arrays of void pointers to "neurons" in plain C. What did we use "AI" for back then? To guess the next note in a MIDI melody, and to recognise the shape of a scored note, minim, crotchet, quaver on a 5 x 9 dot grid. 85% accuracy was "good enough" then.
bwestergard•2mo ago
Did the output sound musical?
nonrandomstring•2mo ago
For small values of "music"? Really, no. But tbh, neither have more advanced "AI" composition experiments I've encountered over the years, Markov models, linear predictive coding, genetic/evolutionary algs, rule-based systems, and now modern diffusion and transformers... they all lack the "spirit of jazz" [0]

[0] https://i.pinimg.com/originals/e4/84/79/e484792971cc77ddff8f...

gopalv•2mo ago
> recognise the shape of a scored note, minim, crotchet, quaver on a 5 x 9 dot grid

Reading music off a lined page sounds like a fun project, particularly to do it from scratch like 3Blue1Brown's number NN example[1].

Mix with something like Chuck[2] and you can write a completely clientside application with today's tech.

[1] - https://www.3blue1brown.com/lessons/neural-networks

[2] - https://chuck.stanford.edu/

nonrandomstring•2mo ago
Thanks for these links. You're right, I think computer-vision "sight reading" is now a fairly done deal. Very impressive progress in the past 30 years.
bbstats•2mo ago
anyone got an archived link?
tambourine_man•2mo ago
Typo

“They can be used for generating new data that…”

munchler•2mo ago
Another typo (or thinko) in the very first sentence:

"Here we introduce introduction to Boltzmann machines"

croemer•2mo ago
More typos (LLMs are really good at finding these):

"Press the "Run Simulation" button to start traininng the RBM." ("traininng" -> "training")

"...we want to derivce the contrastive divergence algorithm..." ("derivce" -> "derive")

"A visisble layer..." ("visisble" -> "visible")

anomancer•2mo ago
Author here. There were so many typos in the first draft that hit HN; all fixed now.
nayuki•2mo ago
Oh, this is a neat demo. I took Geoff Hinton's neural networks course in university 15 years ago and he did spend a couple of lectures explaining Boltzmann machines.

> A Restricted Boltzmann Machine is a special case where the visible and hidden neurons are not connected to each other.

This wording is wrong; it implies that visible neurons are not connected to hidden neurons.

The correct wording is: visible neurons are not connected to each other and hidden neurons are not connected to each other.

Alternatively: visible and hidden neurons do not have internal connections within their own type.
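
To make the restriction concrete, here is a minimal parameter sketch in Python/NumPy (sizes and names are illustrative, not from the article): the only trainable couplings are one matrix between the two layers, plus biases.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 6, 3  # hypothetical sizes, for illustration only

    # The only couplings in an RBM: one weight matrix *between* the two
    # layers, plus per-unit biases. There is deliberately no
    # visible-visible and no hidden-hidden weight matrix -- that absence
    # is the "restriction".
    W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
    a = np.zeros(n_visible)  # visible biases
    b = np.zeros(n_hidden)   # hidden biases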

CamperBob2•2mo ago
> Alternatively: visible and hidden neurons do not have internal connections within their own type.

I'm a bit unclear on how that isn't just an MLP. What's different about a Boltzmann machine?

Edit: never mind, I didn't realize I needed to scroll up to get to the introductory overview.

What 0xTJ's [flagged][dead] comment says about it being undesirable to hijack or otherwise attempt to reinvent scrolling is spot on.

nayuki•2mo ago
> I'm a bit unclear on how that isn't just a multi-layer perceptron. What's different about a Boltzmann machine?

In a Boltzmann machine, you alternate back and forth between using visible units to activate hidden units, and then use hidden units to activate visible units.
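
For concreteness, a minimal NumPy sketch of one such alternation for a binary RBM (variable names are illustrative, not from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v, W, a, b):
        # Visible units activate the hidden units...
        h = (rng.random(b.shape) < sigmoid(v @ W + b)).astype(float)
        # ...then the hidden units activate (reconstruct) the visible units.
        v_new = (rng.random(a.shape) < sigmoid(h @ W.T + a)).astype(float)
        return v_new, h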

> What 0xTJ's [flagged][dead] comment says about it being undesirable to hijack or otherwise attempt to reinvent scrolling is spot on.

The page should be considered a slideshow that is paged discretely and not scrollable continuously. And there should definitely be no scrolling inertia.

sitkack•2mo ago
Fun article on David Ackley https://news.unm.edu/news/24-nobel-prize-in-physics-cited-gr...

Do check out his T2 Tile Project.

AstroJetson•2mo ago
The key takeaway is that there are lots of people involved in making these breakthroughs.

The value of grad students is often overlooked, they contribute so much and then later on advance the research even more.

Why does America look on research as a waste, when it has moved everything so far forward?

macintux•2mo ago
It's more accurate to say that businesspeople consider research a waste in our quarter-by-quarter investment climate, since it generally doesn't lead to immediate gains.

And our current leadership considers research a threat, since science rarely supports conspiracy theorists or historical revisionism.

sfifs•2mo ago
More charitably, America's current government has an unusually large concentration of businesspeople. Interestingly, they were elected as a vote for change, not once but twice, by a population of non-businesspeople who were tired of the economic marginalization they suffered when their government consisted largely of non-businesspeople. It will be interesting to see how this plays out.
richk449•2mo ago
Why do you say that America looks on research as a waste? We spend a higher percentage of GDP on R&D than just about any other country in the world:

https://en.wikipedia.org/wiki/List_of_sovereign_states_by_re...

itissid•2mo ago
IIUC, we need Gibbs sampling (to compute the weight updates) instead of the gradient-based forward and backward passes we are used to with today's neural networks. Anyone understand why that is so?
ebolyen•2mo ago
Not an expert, but I have a bit of formal training on Bayesian stuff which handles similar problems.

Usually Gibbs is used when there's no straightforward gradient (or when you are interested in reproducing the distribution itself, rather than a point estimate), but you do have some marginal/conditional likelihoods which are simple to sample from.

Since each visible node depends on each hidden node and each hidden node affects all visible nodes, the gradient ends up being very messy, so it's much simpler to use Gibbs sampling to adjust based on marginal likelihoods.

oac•2mo ago
I might be mistaken, but I think this is partly because of the undirected structure of RBMs, so you can't build a computational graph in the same way as with feed-forward networks.
alimw•2mo ago
By "undirected structure" I assume you refer to the presence of cycles in the graph? I was taught to call such networks "recurrent" but it seems that that term has evolved to mean something slightly different. Anyway yeah, because of the cycles Gibbs sampling is key to the network's operation. One still employs gradient descent during training, but the procedure to calculate the gradient itself involves Gibbs sampling.

Edit: Actually, I was talking about the general Boltzmann machine. For the Restricted Boltzmann Machine, an approximation has been assumed which obviates the need for full Gibbs sampling during training. Then (quoting the article, emphasis mine) "after training, it can sample new data from the learned distribution using Gibbs sampling."

btickell•2mo ago
Thought I'd weigh in here as well: I believe Gibbs sampling is being used as a way to approximate the expectation over the model distribution. This value is required to compute the gradient of the log likelihood, but integrating the distribution is intractable.

This is done in a similar way as you may use MCMC to draw a representative sample from a VAE. In the deep learning formulation of a neural network the gradient is estimated over batches of the dataset rather than over an explicitly modeled probability distribution.
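
To tie this sub-thread together, here is a minimal CD-1 sketch for a binary RBM (not the article's code; names and the learning rate are illustrative). A single Gibbs step stands in for the intractable expectation over the model distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, a, b, lr=0.1):
        # Positive phase: hidden probabilities driven by a data vector v0.
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step approximates the model expectation.
        pv1 = sigmoid(h0 @ W.T + a)
        ph1 = sigmoid(pv1 @ W + b)
        # Gradient estimate: data correlations minus reconstruction correlations.
        W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
        a += lr * (v0 - pv1)
        b += lr * (ph0 - ph1)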

pawanjswal•2mo ago
Love how this breaks down Boltzmann Machines—finally makes this 'energy-based model' stuff click!
BigParm•2mo ago
That font with a bit of margin looks fantastic on my phone specifically. Really nailing the minimalist look. What font is that?
mac9•2mo ago
"font-family: ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";"

from the CSS, so odds are it's whatever your browser or OS's default sans-serif font is. In my case it's SF Pro, which is an Apple font, though it may vary on a non-Apple device.

nickvec•2mo ago
> Here we introduce introduction to Boltzmann machines and present a Tiny Restricted Boltzmann Machine that runs in the browser.

nit: should "introduction" be omitted?

antidumbass•2mo ago
The section after the interactive diagrams has no left padding and thus runs off the screen on iOS.
rollulus•2mo ago
Now the real question: is it you enjoying that nice page or is it a Boltzmann Brain?

https://en.m.wikipedia.org/wiki/Boltzmann_brain

alganet•2mo ago
It doesn't matter.

It's Descartes' demon all over again. The problem was solved centuries ago. You can skin it however you want; it's the same problem.

taneq•2mo ago
We can’t really discuss Descartes without first explaining the horse.
alganet•2mo ago
I don't think there is anything to discuss about Descartes.
taneq•2mo ago
https://www.reddit.com/r/Jokes/comments/8bmuk8/a_horse_walks...

;)

nickvec•2mo ago
Great site! Would be cool to be able to adjust the speed at which the simulation runs as well.
thingamarobert•2mo ago
This is very well made, and so nostalgic to me! My whole PhD between 2012-16 was based on RBMs and I learned so much about generative ML through these models. Research has come so far and one doesn't hear much about them these days but they were really at the heart of the "AI Spring" back then.
tomrod•2mo ago
Great read!

One nit, a misspelling in the Appendix: derivce -> derive

oac•2mo ago
Nice and clean explanation!

It brings up a lot of memories! Shameless plug: I made a visualization of an RBM being trained years ago: https://www.youtube.com/watch?v=lKAy_NONg3g

dr_dshiv•2mo ago
My understanding is that the Harmonium (Smolensky) was the first restricted Boltzmann machine, but maximized “harmony” instead of minimizing “energy.” When Smolensky, Hinton and Rumelhart collaborated, they instead called it “goodness of fit.”

The harmonium paper [1] is a really nice read. Hinton obviously became the superstar and Smolensky wrote long books about linguistics.

Anyone know more about this history?

[1] https://stanford.edu/~jlmcc/papers/PDP/Volume%201/Chap6_PDP8...
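
For reference, the sign relationship as I understand it, written with the standard RBM energy (Smolensky's exact formulation may differ):

    E(v, h) = -a^T v - b^T h - v^T W h,    H(v, h) = -E(v, h)

so maximizing harmony H is the same optimization as minimizing energy E.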

Nevermark•2mo ago
I mistook the title for "A Tiny Boltzmann Brain"! [0]

My own natural mind immediately solved the conundrum. Surely this was a case where a very small model was given randomly generated weights and then tested to see if it actually did something useful!

After all, the smaller the model, the more likely simple random generation can produce something interesting, relative to its size.

I stand corrected, but not discouraged!

I propose a new class of model, the "Unbiased-Architecture Instant Boltzmann Model" (UA-IBM).

One day we will have quantum computers large enough to simply set up the whole dataset as a classical constraint on a model defined with N serialized values, representing all the parameters and architecture settings. Then let a quantum system with N qubits take one inference step over all the classical samples, with all possible parameters and architectures in quantum superposition, and then reduce the result to return the best (or near-best) model's parameters and architecture in classical form.

Anyone have a few qubits lying around that want to give this a shot? (The irony: everything is quantum, and yet it's so slippery we can hardly put any of it to work yet.)

(Sci-fi story premise: the totally possible case of an alien species that evolved one-off quantum sensor, which evolved into a whole quantum sensory system, then a nervous system, and subsequently full quantum intelligence out of the gate. What kind of society and technological trajectory would they have? Hopefully they are in close orbit around a black hole, so the impact of their explosive progress has not threatened us yet. And then one day, they escape their gravity well, and ...)

[0] https://en.wikipedia.org/wiki/Boltzmann_brain

immibis•2mo ago
That isn't how quantum computers work.
Nevermark•2mo ago
My understanding, which is far from certain, is that problems like this (try a large combination of solutions, cooperate between superpositions to identify the best, then set up the quantum outputs so when they are classically sampled, the best answer is the most likely sample obtained) are solved in four stages:

1. 2^N potential solutions encoded in superposition across N qubits.

2. Each superposition goes through the quantum version of a normal inference pass, using its own weights to iteratively process all the classical data, then calculates performance. (This is all done without any classical sampling.)

3. Cross-superposition communication that results in agreement on the desired version. This is the weakest part of my knowledge: I know such an operation exists, but I don't know how circuits implement it. But the number of bits being operated on is small, just a performance measure. (This is also done without any classical sampling.)

4. Then the output is sampled to get classical values. This requires N * log2(N) circuit complexity, where N is the number of bits defining the chosen solution, i.e. parameters and architectural settings. This can be a lot of hardware, obviously, perhaps more than the rest of the hardware, given N will be very large.

Don't take anything I say here for granted. I have designed parts of such a circuit, using ideal quantum gates in theory, but not all of it. I am not an expert, but I believe that every step here is well understood by others.

The downside relative to other approaches: it does a complete search of the entire solution space directly, so the result comes from reducing 2^N superpositions across N qubits down to N classical bits, where for interesting models N can be very large (billions of parameters times parameter bit width). No efficiencies are obtained from gradient information or any other heuristic. This is the brute-force method, so an upper bound on hardware requirements.

Another disadvantage: since the solution was not obtained by following a gradient, the best solution might not be robust. Meaning, it might be a kind of vertex between robust solutions that manages to do best on the design data, while small changes of input relative to the design data might not generalize well. This is less likely to be a problem if smoothing/regularization information is taken into account when determining each superposition's performance.

ithkuil•2mo ago
Poor quantum beings. They don't have access to a computation model that exceeds the speed of their own thoughts, and they are forever doomed to wait a long time for computations to happen.
anomancer•2mo ago
Author here! Thanks for all the comments, didn't expect this to hit the front page.

Cleaning up the abundant typos and the margin and scroll issues now; thanks for pointing them out.

anomancer•2mo ago
Have cleaned up the typos, and it should look much better on mobile now.