frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•44s ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•4m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•4m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•5m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•5m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•7m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•9m ago•1 comments

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•9m ago•1 comments

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•14m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•15m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•16m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
1•Brajeshwar•16m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•18m ago•1 comments

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•18m ago•1 comments

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
7•c420•18m ago•1 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•19m ago•0 comments

It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
3•HotGarbage•19m ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Back end)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•19m ago•1 comments

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•21m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
4•surprisetalk•24m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
3•TheCraiggers•25m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
2•birdculture•26m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
12•doener•26m ago•2 comments

MyFlames: View MySQL execution plans as interactive FlameGraphs and BarCharts

https://github.com/vgrippa/myflames
1•tanelpoder•28m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•28m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
3•tanelpoder•29m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•30m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
4•elsewhen•33m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•34m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•38m ago•0 comments

UGMM-NN: Univariate Gaussian Mixture Model Neural Network

https://arxiv.org/abs/2509.07569
31•zakeria•4mo ago

Comments

zakeria•4mo ago
uGMM-NN is a novel neural architecture that embeds probabilistic reasoning directly into the computational units of deep networks. Unlike traditional neurons, which apply weighted sums followed by fixed nonlinearities, each uGMM-NN node parameterizes its activations as a univariate Gaussian mixture, with learnable means, variances, and mixing coefficients.
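
As a rough illustration, here is a minimal PyTorch-style sketch of one layer of such units (a reading of the description above, not the paper's code; the class name, the linear map that produces each unit's scalar argument, and the default of 4 components are my assumptions):

    import math
    import torch
    import torch.nn as nn

    class UGMMLayer(nn.Module):
        """Sketch: each output unit holds K univariate Gaussian components with
        learnable means, log-variances and mixing logits, and returns the
        log-density of its (linearly projected) input under that mixture."""
        def __init__(self, in_features, out_features, n_components=4):
            super().__init__()
            self.proj = nn.Linear(in_features, out_features)    # assumed: y_j = w_j . x + b_j
            self.means = nn.Parameter(torch.randn(out_features, n_components))
            self.log_vars = nn.Parameter(torch.zeros(out_features, n_components))
            self.mix_logits = nn.Parameter(torch.zeros(out_features, n_components))

        def forward(self, x):                                    # x: (batch, in_features)
            y = self.proj(x).unsqueeze(-1)                       # (batch, out_features, 1)
            var = self.log_vars.exp()
            log_comp = -0.5 * ((y - self.means) ** 2 / var       # log N(y; mu_k, var_k)
                               + self.log_vars + math.log(2 * math.pi))
            log_mix = torch.log_softmax(self.mix_logits, dim=-1)
            return torch.logsumexp(log_mix + log_comp, dim=-1)   # (batch, out_features) log-densities

Stacking such layers gives a feedforward net whose activations are per-unit log-densities rather than fixed nonlinearities, which is the sense in which inference happens "inside" the network.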
vessenes•4mo ago
Meh. Well, at least, possibly “meh”.

Upshot: Gaussian sampling along the parameters of nodes rather than a fixed number. This might offer one of the following:

* Better inference time accuracy on average

* Faster convergence during training

It probably costs additional inference and training compute.

The paper demonstrates worse results on MNIST, and shows the architecture is more than capable of dealing with the Iris test (which I hadn’t heard of; categorizing types of irises, I presume the flower, but maybe the eye?).

The paper claims to keep the number of parameters and depth the same, but it doesn’t report:

* training time/flops (probably more I’d guess?)

* inference time/flops (almost certainly more)

Intuitively, if you’ve got a mean, variance, and mix coefficient, then you have triple the data space per parameter; there’s no word on whether the networks were normalized by the total data taken by the NN or just by the number of “parameters”.
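
To put rough numbers on that (my back-of-the-envelope count, assuming one mixture component per incoming connection, n inputs and m units per layer):

    \underbrace{nm + m}_{\text{MLP layer: weights + biases}}
    \qquad\text{vs.}\qquad
    \underbrace{nm}_{\text{means}} + \underbrace{nm}_{\text{variances}} + \underbrace{nm}_{\text{mixing weights}} = 3nm
    \quad \text{(uGMM layer)}

So the uGMM layer stores roughly 3x the scalars at the same nominal parameter count, before counting whatever maps the inputs to each unit's scalar argument.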

Upshot - I don’t think this paper demonstrates any sort of benefit here or elucidates the tradeoffs.

Quick reminder, negative results are good, too. I’d almost rather see the paper framed that way.

zakeria•4mo ago
Thanks for the comment. Just to clarify, the uGMM-NN isn't simply "Gaussian sampling along the parameters of nodes."

Each neuron is a univariate Gaussian mixture with learnable means, variances, and mixture weights. This gives the network the ability to perform probabilistic inference natively inside its architecture, rather than approximating uncertainty after the fact.

The work isn’t framed as "replacing MLPs." The motivation is to bridge two research traditions:

- probabilistic graphical models and probabilistic circuits (relatively newer)

- deep learning architectures

That's why the Iris dataset (despite being simple) was included - not as a discriminative benchmark, but to show the model could be trained generatively in a way similar to PGMs, something a standard MLP cannot do - hence the other benefits of the approach mentioned in the paper.

vessenes•4mo ago
Thanks for writing back! I appreciate the plan to integrate the two architectures. On that front, it might be interesting to have a future research section - like what would be uniquely good about this architecture if scaled up?

On ‘usefulness’ I think I’m still at my original question: it seems like an open theoretical question whether an architecture that costs a tripled-or-greater training budget, a tripled data-size budget for the NN, and probably a close-to-triple-or-greater inference budget (the costs of the architecture you described) cannot be closely approximated by a “fair equivalent”-ly sized MLP.

I hear you that the architecture can do more, but can you talk about this fair-size question I have? That is, if a PGM of the same size as your original network in terms of weights and depth is as effective, then we’d still have a space savings by just having the two networks (MLP and PGM) side by side.

Thanks again for publishing!

zakeria•4mo ago
That’s a fair question. You’re right that on paper a uGMM neuron looks like it “costs” ~3× an MLP weight. But there are levers to balance that. For example, the paper discusses parameter tying, where the Gaussian component means are tied directly to the input activations. In that setup, each neuron only learns the mixture weights and variances, which cuts parameters significantly while still preserving probabilistic inference. The tradeoff may be reduced expressiveness, but it shows the model doesn’t have to be 3x heavier.
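
To make that bookkeeping concrete, here is a tiny sketch (my construction, not the paper's code; the class names, shapes, and the one-component-per-input assumption are mine):

    import torch
    import torch.nn as nn

    class FullUGMMParams(nn.Module):
        # untied variant: means, variances and mixing weights all learned
        def __init__(self, in_f, out_f):
            super().__init__()
            self.means      = nn.Parameter(torch.randn(out_f, in_f))
            self.log_vars   = nn.Parameter(torch.zeros(out_f, in_f))
            self.mix_logits = nn.Parameter(torch.zeros(out_f, in_f))   # ~3 * in_f * out_f scalars

    class TiedUGMMParams(nn.Module):
        # tied variant: component means come from the incoming activations
        # at run time, so only variances and mixing weights are stored
        def __init__(self, in_f, out_f):
            super().__init__()
            self.log_vars   = nn.Parameter(torch.zeros(out_f, in_f))
            self.mix_logits = nn.Parameter(torch.zeros(out_f, in_f))   # ~2 * in_f * out_f scalars

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(FullUGMMParams(64, 32)), count(TiedUGMMParams(64, 32)))   # 6144 vs 4096

In this toy count, tying the means takes a 64-to-32 layer from 6144 to 4096 stored scalars, i.e. a third less, with the expressiveness tradeoff noted above.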

More broadly: traditional graphical models were largely intractable at deep learning scale until probabilistic circuits, which introduced tractable probabilistic semantics without exploding parameter counts. Circuits do this by constraining model structure. uGMM-NN sits differently: it brings probabilistic reasoning inside dense architectures.

So while the compute cost is real, the “fair comparison” isn’t just params-per-weight; it’s also about what kinds of inference the model can do at all, and about the added interpretability of mixture-based neurons, which traditional MLP neurons don’t provide. In that sense it shares some spirit with recent work like KAN, but tackles the problem through probabilistic modeling rather than spline-based function fitting.

ericdoerheit•4mo ago
Thank you for your work! I would be interested to see what this means for a CNN architecture. Maybe the whole architecture wouldn't actually need to be based on uGMM-NNs, but only the last layers?
zakeria•4mo ago
Thanks - good question. In theory, the uGMM layer could complement CNNs in different ways; for example, one could imagine (as you mentioned):

- using standard convolutional layers for feature extraction,

- then replacing the final dense layers with uGMM neurons to enable probabilistic inference and uncertainty modeling on top of the learned features (a rough sketch of this wiring follows below).

My current focus, however, is exploring how uGMMs translate into Transformer architectures, which could open up interesting possibilities for probabilistic reasoning in attention-based models.
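
That sketch, for concreteness (not from the paper; the head condenses the uGMM layer outlined earlier in the thread, and the 28x28 input size, channel counts, and class count are placeholders):

    import math
    import torch
    import torch.nn as nn

    class UGMMHead(nn.Module):
        # condensed uGMM layer: one log-density output per class
        def __init__(self, in_features, n_classes, n_components=4):
            super().__init__()
            self.proj = nn.Linear(in_features, n_classes)
            self.means = nn.Parameter(torch.randn(n_classes, n_components))
            self.log_vars = nn.Parameter(torch.zeros(n_classes, n_components))
            self.mix_logits = nn.Parameter(torch.zeros(n_classes, n_components))

        def forward(self, x):
            y = self.proj(x).unsqueeze(-1)
            log_comp = -0.5 * ((y - self.means) ** 2 / self.log_vars.exp()
                               + self.log_vars + math.log(2 * math.pi))
            log_mix = torch.log_softmax(self.mix_logits, dim=-1)
            return torch.logsumexp(log_mix + log_comp, dim=-1)   # (batch, n_classes)

    hybrid = nn.Sequential(                        # e.g. 28x28 grayscale inputs
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),                              # 32 * 7 * 7 = 1568 features
        UGMMHead(32 * 7 * 7, n_classes=10),        # probabilistic head on learned features
    )

How such a head would be trained (discriminatively or generatively) is a separate question; this only shows where the probabilistic layer would sit.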

magicalhippo•4mo ago
I'm having a very dense moment, I think, and it's been far too long since the statistics courses.

They state the output of a neuron j is a log density P_j(y), where y is a latent variable.

But how does the output from the previous layer, x, come into play?

I guess I was expecting some kind of conditional probability, i.e. the output is P_j given x or something.

Again, perhaps trivial. Just struggling to figure out how it works in practice.

magicalhippo•4mo ago
Clearly I wasn't in neural net mode. I take it then the learned parameters, the means, variances and mixing coefficients, are effectively functions of the output of the previous layer.
zakeria•4mo ago
Thanks - that's correct: the Gaussian mixture parameters (mu, sigma, pi) are learned as functions of the input from the previous layer. So it’s still a feedforward net: the activations x from one layer determine the mixture parameters for the next layer.

The reason the neuron’s output is written as a log-density P_j(y) is just to emphasize the probabilistic view: each neuron is modeling how likely a latent variable y would be under its mixture distribution.
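
In symbols, one way to read this (my notation, not the paper's; K components per neuron, with pi_jk written for the mixture weights to keep them apart from the constant pi):

    \log P_j(y_j \mid \mathbf{x})
        = \log \sum_{k=1}^{K} \pi_{jk}(\mathbf{x}) \,
          \mathcal{N}\!\left( y_j ;\, \mu_{jk}(\mathbf{x}),\, \sigma_{jk}^{2}(\mathbf{x}) \right),
    \qquad \sum_{k=1}^{K} \pi_{jk}(\mathbf{x}) = 1,

where x is the previous layer's activation vector and the log-density on the left is what neuron j passes forward.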

yobbo•4mo ago
It's not clear from the formulas how x = [x1, ..., xN] relates to y, μ, and σ, since these are defined without x. I'm assuming y = Wx + b, with μ, σ, and π learnable parameters for each output dimension. The symbol π also seems to mean both a mixture weight and the constant 3.14159 in the same formula.

Overall it looks similar to radial basis activations, but the activations look to be the log of weighted "stochastic" sums (weights summing to one) of a set of radial basis functions.

The biggest difference is probably the log outputs.

zakeria•4mo ago
Thanks for the comment - uGMM neurons are not just "RBFs with log outputs". Each neuron is a mixture of Gaussians with trainable means, variances, and mixture weights, encoding uncertainty and multimodality that propagate through the network. The log-likelihood output is simply a consequence of this probabilistic formulation, not an innovation in itself.