frontpage.

Make macOS consistently bad (unironically)

https://lr0.org/blog/p/macos/
233•speckx•4h ago•157 comments

Why are executives enamored with AI, but ICs aren't?

https://johnjwang.com/post/2026/03/27/why-are-executives-enabled-with-ai-but-ics-arent/
13•johnjwang•16m ago•6 comments

Velxio 2.0 – Emulate Arduino, ESP32, and Raspberry Pi 3 in the Browser

https://github.com/davidmonterocrespo24/velxio
52•dmcrespo•2h ago•14 comments

Anatomy of the .claude/ folder

https://blog.dailydoseofds.com/p/anatomy-of-the-claude-folder
339•freedomben•8h ago•177 comments

ISBN Visualization – Anna's Archive

https://annas-archive.gd/isbn-visualization?
65•Cider9986•3h ago•10 comments

Nashville library launches Memory Lab for digitizing home movies

https://www.axios.com/local/nashville/2026/03/16/nashville-library-digitize-home-movies
80•toomuchtodo•3d ago•17 comments

Telnyx package compromised on PyPI

https://telnyx.com/resources/telnyx-python-sdk-supply-chain-security-notice-march-2026
72•ramimac•14h ago•80 comments

DOJ confirms FBI Director Kash Patel's personal email was hacked

https://arstechnica.com/tech-policy/2026/03/doj-confirms-fbi-director-kash-patels-personal-email-...
82•sebastian_z•1h ago•43 comments

LG's new 1Hz display is the secret behind a new laptop's battery life

https://www.pcworld.com/article/3096432/lgs-new-1hz-display-is-the-secret-behind-a-new-laptops-ba...
77•robotnikman•4d ago•25 comments

Installing a Let's Encrypt TLS certificate on a Brother printer with Certbot

https://owltec.ca/Other/Installing+a+Let%27s+Encrypt+TLS+certificate+on+a+Brother+printer+automat...
176•8organicbits•9h ago•47 comments

The Future of SCIP

https://sourcegraph.com/blog/the-future-of-scip
23•jdorfman•7h ago•10 comments

Explore the Hidden World of Sand

https://magnifiedsand.com/
159•RAAx707•4d ago•33 comments

Automatically generate all 3D print files for organizing a drawer

https://geniecrate.com/
13•woktalk•2d ago•7 comments

Building FireStriker: Making Civic Tech Free

https://firestriker.org/blog/building-firestriker-why-im-making-civic-tech-free
80•noleary•1d ago•18 comments

Matlab Alternatives 2026: Benchmarks, GPU, Browser and Compatibility Compared

https://runmat.com/blog/free-matlab-alternatives
4•bauta-steen•2d ago•1 comment

Meow.camera

https://meow.camera/#4258783365322591678
168•surprisetalk•8h ago•37 comments

Show HN: Twitch Roulette – Find live streamers who need views the most

https://twitchroulette.net/
8•ellg•1h ago•2 comments

Embracing Bayesian methods in clinical trials

https://jamanetwork.com/journals/jama/fullarticle/2847011
71•nextos•3d ago•7 comments

‘Energy independence feels practical’: Europeans building mini solar farms

https://www.euronews.com/2026/03/26/suddenly-energy-independence-feels-practical-europeans-are-bu...
193•vrganj•14h ago•170 comments

Desk for people who work at home with a cat

https://soranews24.com/2026/03/27/japan-now-has-a-special-desk-for-people-who-work-at-home-with-a...
313•zdw•8h ago•123 comments

Capability-Based Security for Redox: Namespace and CWD as Capabilities

https://www.redox-os.org/news/nlnet-cap-nsmgr-cwd/
21•ejplatzer•4h ago•1 comment

People inside Microsoft are fighting to drop mandatory Microsoft Account

https://www.windowscentral.com/microsoft/windows-11/people-inside-microsoft-are-fighting-to-drop-...
469•breve•9h ago•378 comments

Slovenia becomes first EU country to introduce fuel rationing

https://www.bbc.com/news/articles/c77m4zx6zvmo
108•measurablefunc•2h ago•149 comments

Hold on to Your Hardware

https://xn--gckvb8fzb.com/hold-on-to-your-hardware/
552•LucidLynx•13h ago•450 comments

21,864 Yugoslavian .yu domains

https://jacobfilipp.com/yu/
69•freediver•2d ago•87 comments

Gzip decompression in 250 lines of Rust

https://iev.ee/blog/gzip-decompression-in-250-lines-of-rust/
107•vismit2000•3d ago•38 comments

Should QA exist?

https://www.rubick.com/should-qa-exist/
83•PretzelFisch•13h ago•124 comments

Everything old is new again: memory optimization

https://nibblestew.blogspot.com/2026/03/everything-old-is-new-again-memory.html
170•ibobev•4d ago•123 comments

Solving Semantle with the Wrong Embeddings

https://victoriaritvo.com/blog/robust-semantle-solver/
10•evakhoury•3d ago•0 comments

EMachines never obsolete PCs: More than a meme

https://dfarq.homeip.net/emachines-never-obsolete-pcs-more-than-a-meme/
58•zdw•3d ago•37 comments

A Tiny Boltzmann Machine

https://eoinmurray.info/boltzmann-machine
262•anomancer•10mo ago

Comments

vanderZwan•10mo ago
Lovely explanation!

Just FYI: mouse-scrolling is much too sensitive for some reason (I'm assuming it swipes just fine in mobile contexts, have not checked that). The result is that it jumped from first to last "page" and back whenever I tried scrolling. Luckily keyboard input worked so I could still read the whole thing.

djulo•10mo ago
that's soooo coool
nonrandomstring•10mo ago
This takes me back. 1990, building Boltzmann machines and Perceptrons from arrays of void pointers to "neurons" in plain C. What did we use "AI" for back then? To guess the next note in a MIDI melody, and to recognise the shape of a scored note (minim, crotchet, quaver) on a 5 x 9 dot grid. 85% accuracy was "good enough" then.
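For flavor, the kind of single-neuron classifier described above can be sketched in a few lines of modern Python. Everything here (sizes aside, the data, labels, and names) is synthetic and illustrative, not from the original 1990 project:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-in for the setup above: one perceptron over a
# 5 x 9 binary dot grid (45 inputs), deciding class membership.
n_inputs = 5 * 9

# Toy dataset: random grids labeled by a hidden linear rule, so the
# problem is linearly separable and the perceptron can converge.
X = rng.integers(0, 2, size=(20, n_inputs)).astype(float)
hidden_w = rng.normal(size=n_inputs)
y = (X @ hidden_w > 0).astype(float)

# Classic perceptron learning rule: nudge weights on each mistake.
w = np.zeros(n_inputs)
b = 0.0
for _ in range(200):  # epochs
    for xi, yi in zip(X, y):
        pred = float(xi @ w + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

accuracy = float(np.mean(((X @ w + b) > 0).astype(float) == y))
```

On separable data like this, the perceptron convergence theorem guarantees the loop eventually stops making mistakes; the 1990 version simply did the same arithmetic through void pointers.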
bwestergard•10mo ago
Did the output sound musical?
nonrandomstring•10mo ago
For small values of "music"? Really, no. But tbh, neither have more advanced "AI" composition experiments I've encountered over the years: Markov models, linear predictive coding, genetic/evolutionary algs, rule-based systems, and now modern diffusion and transformers... they all lack the "spirit of jazz" [0]

[0] https://i.pinimg.com/originals/e4/84/79/e484792971cc77ddff8f...

gopalv•10mo ago
> recognise the shape of a scored note, minim, crotchet, quaver on a 5 x 9 dot grid

Reading music off a lined page sounds like a fun project, particularly to do it from scratch like 3Blue1Brown's digit-recognition NN example[1].

Mix with something like Chuck[2] and you can write a completely clientside application with today's tech.

[1] - https://www.3blue1brown.com/lessons/neural-networks

[2] - https://chuck.stanford.edu/

nonrandomstring•10mo ago
Thanks for these links. You're right, I think computer-vision "sight reading" is now a fairly done deal. Very impressive progress in the past 30 years.
bbstats•10mo ago
anyone got an archived link?
tambourine_man•10mo ago
Typo

“They can be used for generating new data that…”

munchler•10mo ago
Another typo (or thinko) in the very first sentence:

"Here we introduce introduction to Boltzmann machines"

croemer•10mo ago
More typos (LLMs are really good at finding these):

"Press the "Run Simulation" button to start traininng the RBM." ("traininng" -> "training")

"...we want to derivce the contrastive divergence algorithm..." ("derivce" -> "derive")

"A visisble layer..." ("visisble" -> "visible")

anomancer•10mo ago
Author here. There were so many typos in the first draft that hit HN; all fixed now.
nayuki•10mo ago
Oh, this is a neat demo. I took Geoff Hinton's neural networks course in university 15 years ago and he did spend a couple of lectures explaining Boltzmann machines.

> A Restricted Boltzmann Machine is a special case where the visible and hidden neurons are not connected to each other.

This wording is wrong; it implies that visible neurons are not connected to hidden neurons.

The correct wording is: visible neurons are not connected to each other and hidden neurons are not connected to each other.

Alternatively: visible and hidden neurons do not have internal connections within their own type.
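For reference, the standard RBM energy function (textbook form, not quoted from the article) makes the restriction explicit: there are visible-hidden terms through $W_{ij}$, but no visible-visible or hidden-hidden terms:

```latex
E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j
```

where $a_i$ and $b_j$ are the visible and hidden biases. A general Boltzmann machine would add $v_i v_{i'}$ and $h_j h_{j'}$ terms.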

CamperBob2•10mo ago
> Alternatively: visible and hidden neurons do not have internal connections within their own type.

I'm a bit unclear on how that isn't just an MLP. What's different about a Boltzmann machine?

Edit: never mind, I didn't realize I needed to scroll up to get to the introductory overview.

What 0xTJ's [flagged][dead] comment says about it being undesirable to hijack or otherwise attempt to reinvent scrolling is spot on.

nayuki•10mo ago
> I'm a bit unclear on how that isn't just a multi-layer perceptron. What's different about a Boltzmann machine?

In a Boltzmann machine, you alternate back and forth between using visible units to activate hidden units, and then use hidden units to activate visible units.

> What 0xTJ's [flagged][dead] comment says about it being undesirable to hijack or otherwise attempt to reinvent scrolling is spot on.

The page should be considered a slideshow that is paged discretely and not scrollable continuously. And there should definitely be no scrolling inertia.
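The alternation described above can be sketched as follows; this is a generic illustration with made-up toy sizes, not code from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 6 visible units, 3 hidden units.
n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # bipartite weights only
a = np.zeros(n_visible)  # visible biases
b = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """P(h_j = 1 | v): hidden units are driven only by visible units."""
    p = sigmoid(b + v @ W)
    return (rng.random(n_hidden) < p).astype(float)

def sample_visible(h):
    """P(v_i = 1 | h): visible units are driven only by hidden units."""
    p = sigmoid(a + W @ h)
    return (rng.random(n_visible) < p).astype(float)

# One Gibbs round trip: v -> h -> v'
v0 = rng.integers(0, 2, n_visible).astype(float)
h0 = sample_hidden(v0)
v1 = sample_visible(h0)
```

Because the graph is bipartite, all hidden units can be sampled in parallel given the visible units, and vice versa; that is what makes the back-and-forth alternation cheap.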

sitkack•10mo ago
Fun article on David Ackley https://news.unm.edu/news/24-nobel-prize-in-physics-cited-gr...

Do check out his T2 Tile Project.

AstroJetson•10mo ago
The key takeaway is that there are lots of people involved in making these breakthroughs.

The value of grad students is often overlooked; they contribute so much and then later on advance the research even more.

Why does America look on research as a waste, when it has moved everything so far?

macintux•10mo ago
It's more accurate to say that businesspeople consider research a waste in our quarter-by-quarter investment climate, since it generally doesn't lead to immediate gains.

And our current leadership considers research a threat, since science rarely supports conspiracy theorists or historical revisionism.

sfifs•10mo ago
More charitably, America's current government has an unusually large concentration of business people. Interestingly, they were elected as a vote for change by a population of non-business people who were tired of the economic marginalization they suffered when their government consisted largely of non-business people, not once but twice. It will be interesting to see how this plays out.
richk449•10mo ago
Why do you say that America looks on research as a waste? We spend a higher percentage of GDP on R&D than just about any other country in the world:

https://en.wikipedia.org/wiki/List_of_sovereign_states_by_re...

itissid•10mo ago
IIUC, we need Gibbs sampling (to compute the weight updates) instead of the gradient-based forward and backward passes we're used to with today's neural networks. Anyone understand why that is so?
ebolyen•10mo ago
Not an expert, but I have a bit of formal training on Bayesian stuff which handles similar problems.

Usually Gibbs is used when there's no directly straightforward gradient (or when you are interested in reproducing the distribution itself, rather than a point estimate), but you do have some marginal/conditional likelihoods which are simple to sample from.

Since each visible node depends on every hidden node and each hidden node affects all visible nodes, the gradient ends up being very messy, so it's much simpler to use Gibbs sampling to adjust based on marginal likelihoods.

oac•10mo ago
I might be mistaken, but I think this is partly because of the undirected structure of RBMs, so you can't build a computational graph in the same way as with feed-forward networks.
alimw•10mo ago
By "undirected structure" I assume you refer to the presence of cycles in the graph? I was taught to call such networks "recurrent" but it seems that that term has evolved to mean something slightly different. Anyway yeah, because of the cycles Gibbs sampling is key to the network's operation. One still employs gradient descent during training, but the procedure to calculate the gradient itself involves Gibbs sampling.

Edit: Actually, I was talking about the general Boltzmann machine. For the restricted Boltzmann machine, an approximation has been assumed which obviates the need for full Gibbs sampling during training. Then (quoting the article, emphasis mine) "after training, it can sample new data from the learned distribution using Gibbs sampling."

btickell•10mo ago
Thought I'd weigh in here as well: I believe Gibbs sampling is being used as a way to approximate the expectation over the model distribution. This value is required to compute the gradient of the log-likelihood, but integrating over the distribution is intractable.

This is done in a similar way to how you might use MCMC to draw a representative sample from a VAE. In the deep-learning formulation of a neural network, the gradient is estimated over batches of the dataset rather than over an explicitly modeled probability distribution.
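Putting the comments above together, a minimal contrastive-divergence (CD-1) step might look like this; it is a sketch with invented toy sizes, biases omitted for brevity, and not the article's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy RBM: 4 visible units, 2 hidden units, no biases.
n_visible, n_hidden = 4, 2
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
lr = 0.1  # learning rate

def cd1_step(v0, W):
    # Positive phase: hidden activations driven by the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    # Negative phase: one Gibbs step back to a "reconstruction",
    # standing in for the intractable expectation under the model.
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # CD-1 gradient estimate: <v h>_data - <v h>_reconstruction
    return W + lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

v0 = np.array([1.0, 0.0, 1.0, 1.0])
W_new = cd1_step(v0, W)
```

The single reconstruction step is the approximation: full maximum likelihood would require running the Gibbs chain to equilibrium before computing the negative-phase statistics.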

pawanjswal•10mo ago
Love how this breaks down Boltzmann Machines—finally makes this 'energy-based model' stuff click!
BigParm•10mo ago
That font with a bit of margin looks fantastic on my phone specifically. Really nailing the minimalist look. What font is that?
mac9•10mo ago
font-family: ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";

from the CSS, so odds are it's whatever your browser or OS's default sans-serif font is. In my case it's SF Pro, which is an Apple font, though it may vary if you use a non-Apple device.

nickvec•10mo ago
> Here we introduce introduction to Boltzmann machines and present a Tiny Restricted Boltzmann Machine that runs in the browser.

nit: should "introduction" be omitted?

antidumbass•10mo ago
The section after the interactive diagrams has no left padding and thus runs off the screen on iOS.
rollulus•10mo ago
Now the real question: is it you enjoying that nice page or is it a Boltzmann Brain?

https://en.m.wikipedia.org/wiki/Boltzmann_brain

alganet•10mo ago
It doesn't matter.

It's Descartes' demon all over again. The problem was solved centuries ago. You can skin it however you want; it's the same problem.

taneq•10mo ago
We can’t really discuss Descartes without first explaining the horse.
alganet•10mo ago
I don't think there is anything to discuss about Descartes.
taneq•10mo ago
https://www.reddit.com/r/Jokes/comments/8bmuk8/a_horse_walks...

;)

nickvec•10mo ago
Great site! Would be cool to be able to adjust the speed at which the simulation runs as well.
thingamarobert•10mo ago
This is very well made, and so nostalgic to me! My whole PhD between 2012-16 was based on RBMs and I learned so much about generative ML through these models. Research has come so far and one doesn't hear much about them these days but they were really at the heart of the "AI Spring" back then.
tomrod•10mo ago
Great read!

One nit, a misspelling in the Appendix: derivce -> derive

oac•10mo ago
Nice and clean explanation!

It brings up a lot of memories! Shameless plug: I made a visualization of an RBM being trained years ago: https://www.youtube.com/watch?v=lKAy_NONg3g

dr_dshiv•10mo ago
My understanding is that the Harmonium (Smolensky) was the first restricted Boltzmann machine, but maximized “harmony” instead of minimizing “energy.” When Smolensky, Hinton and Rumelhart collaborated, they instead called it “goodness of fit.”

The harmonium paper [1] is a really nice read. Hinton obviously became the superstar and Smolensky wrote long books about linguistics.

Anyone know more about this history?

[1] https://stanford.edu/~jlmcc/papers/PDP/Volume%201/Chap6_PDP8...

Nevermark•10mo ago
I mistook the title for "A Tiny Boltzmann Brain"! [0]

My own natural mind immediately solved the conundrum. Surely this was a case where a very small model was given randomly generated weights and then tested to see if it actually did something useful!

After all, the smaller the model, the more likely simple random generation can produce something interesting, relative to its size.

I stand corrected, but not discouraged!

I propose a new class of model, the "Unbiased-Architecture Instant Boltzmann Model" (UA-IBM).

One day we will have quantum computers large enough to simply set up the whole dataset as a classical constraint on a model defined with N serialized values, representing all the parameters and architecture settings. Then let a quantum system with N qubits take one inference step over all the classical samples, with all possible parameters and architectures in quantum superposition, and then reduce the result to return the best (or near-best) model's parameters and architecture in classical form.

Anyone have a few qubits lying around and want to give this a shot? (The irony: everything is quantum, and yet it's so slippery we can hardly put any of it to work yet.)

(Sci-fi story premise: the totally possible case of an alien species that evolved a one-off quantum sensor, which evolved into a whole quantum sensory system, then a nervous system, and subsequently full quantum intelligence out of the gate. What kind of society and technological trajectory would they have? Hopefully they are in close orbit around a black hole, so the impact of their explosive progress has not threatened us yet. And then one day, they escape their gravity well, and ...)

[0] https://en.wikipedia.org/wiki/Boltzmann_brain

immibis•10mo ago
That isn't how quantum computers work.
Nevermark•10mo ago
My understanding, which is far from certain, is that problems like this (try a large combination of solutions, cooperate between superpositions to identify the best, then set up the quantum outputs so when they are classically sampled, the best answer is the most likely sample obtained) are solved in four stages:

1. 2^N potential solutions encoded in superposition across N qubits.

2. Each superposition goes through the quantum version of a normal inference pass, using each superposition's weights to iteratively process all the classical data, then calculates performance. (This is all done without any classical sampling.)

3. Cross-superposition communication that results in agreement on the desired version. This is the weakest part of my knowledge: I know that such an operation type exists, but I don't know how circuits implement it. But the number of bits being operated on is small, just a performance measure. (This is also done without any classical sampling.)

4. Then the output is sampled to get classical values. This requires N * log2(N) circuit complexity, where N is the number of bits defining the chosen solution, i.e. parameters and architectural settings. This can be a lot of hardware, obviously, perhaps more than the rest of the hardware, given N will be very large.

Don't take anything I say here for granted. I have designed parts of such a circuit, using ideal quantum gates in theory, but not all of it. I am not an expert, but I believe that every step here is well understood by others.

The downside relative to other approaches: it does a complete search of the entire solution space directly, so the result comes from reducing 2^N superpositions across N qubits, where for interesting models N can be very large (billions of parameters x parameter bit width), to get N qubits. No efficiencies are obtained from gradient information or any other heuristic. This is the brute-force method, so an upper bound on hardware requirements.

Another disadvantage is that, since the solution was not obtained by following a gradient, the best solution might not be robust. Meaning, it might be a kind of vertex between robust solutions that manages to do best on the design data, but small changes of input samples from the design data might not generalize well. This is less likely to be a problem if smoothing/regularization information is taken into account when determining each superposition's performance.

ithkuil•10mo ago
Poor quantum beings. They don't have access to a computation model that exceeds the speeds of their own thoughts and they are forever doomed to be waiting a long time for computations to happen
anomancer•10mo ago
Author here! Thanks for all the comments, didn't expect this to hit the front page.

Cleaning up the abundance of typos and the margin and scroll issues now; thanks for pointing them out.

anomancer•10mo ago
Have cleaned up the typos, and it should look much better on mobile now