frontpage.

Show HN: I'm an airline pilot – I built interactive graphs/globes of my flights

https://jameshard.ing/pilot
1060•jamesharding•13h ago•161 comments

Normalizing Flows Are Capable Generative Models

https://machinelearning.apple.com/research/normalizing-flows
84•danboarder•5h ago•6 comments

Learn OCaml – Exercises

https://ocaml-sf.org/learn-ocaml-public/#activity=exercises
67•smartmic•5h ago•19 comments

SymbolicAI: A neuro-symbolic perspective on LLMs

https://github.com/ExtensityAI/symbolicai
116•futurisold•7h ago•35 comments

James Webb Space Telescope Reveals Its First Direct Image of an Exoplanet

https://www.smithsonianmag.com/smart-news/james-webb-space-telescope-reveals-its-first-direct-image-discovery-of-an-exoplanet-180986886/
129•divbzero•8h ago•57 comments

Structuring Arrays with Algebraic Shapes

https://dl.acm.org/doi/abs/10.1145/3736112.3736141
64•todsacerdoti•6h ago•5 comments

Reinforcement learning, explained with a minimum of math and jargon

https://www.understandingai.org/p/reinforcement-learning-explained
48•JnBrymn•3d ago•1 comment

Qwen VLo: From "Understanding" the World to "Depicting" It

https://qwenlm.github.io/blog/qwen-vlo/
170•lnyan•11h ago•50 comments

Multi-Stage Programming with Splice Variables

https://tsung-ju.org/icfp25/
14•matt_d•2h ago•1 comment

C compiler for Web Assembly (c4wa)

https://github.com/kign/c4wa
7•90s_dev•3d ago•0 comments

10 Years of Pomological Watercolors

https://parkerhiggins.net/2025/04/10-years-of-pomological-watercolors/
170•fanf2•11h ago•28 comments

Facebook is starting to feed its AI with private, unpublished photos

https://www.theverge.com/meta/694685/meta-ai-camera-roll
58•pier25•2h ago•34 comments

bootc-image-builder: Build your entire OS from a Containerfile

https://github.com/osbuild/bootc-image-builder
29•twelvenmonkeys•3d ago•7 comments

nimbme – Nim bare-metal environment

https://github.com/mikra01/nimbme
44•michaelsbradley•7h ago•9 comments

Transmitting data via ultrasound without any special equipment

https://halcy.de/blog/2025/06/27/transmitting-data-via-ultrasound-without-any-special-equipment/
96•todsacerdoti•9h ago•30 comments

Theoretical Analysis of Positional Encodings in Transformer Models

https://arxiv.org/abs/2506.06398
15•PaulHoule•4h ago•1 comment

Spark AI (YC W24) is hiring a full-stack engineer in SF (founding team)

https://www.ycombinator.com/companies/spark/jobs/kDeJlPK-software-engineer-full-stack-founding-team
1•juliawu•5h ago

Rust in the Linux kernel: part 2

https://lwn.net/SubscriberLink/1025232/fbb2d90d084368e3/
68•chmaynard•4h ago•1 comment

A Brief History of Children Sent Through the Mail

https://www.smithsonianmag.com/smart-news/brief-history-children-sent-through-mail-180959372/
84•m-hodges•6h ago•71 comments

New Process Uses Microbes to Create Valuable Materials from Urine

https://newscenter.lbl.gov/2025/06/17/new-process-uses-microbes-to-create-valuable-materials-from-urine/
24•gmays•7h ago•5 comments

Weird Expressions in Rust

https://www.wakunguma.com/blog/rust-weird-expr
141•lukastyrychtr•11h ago•111 comments

The Journey of Bypassing Ubuntu's Unprivileged Namespace Restriction

https://u1f383.github.io/linux/2025/06/26/the-journey-of-bypassing-ubuntus-unprivileged-namespace-restriction.html
12•Bogdanp•5h ago•1 comment

Whitesmiths C compiler: One of the earliest commercial C compilers available

https://github.com/hansake/Whitesmiths-C-compiler
96•todsacerdoti•4d ago•31 comments

Does a Focus on Royalty Obscure British History?

https://www.historytoday.com/archive/head-head/does-focus-royalty-obscure-british-history
16•pepys•3d ago•5 comments

Glass nanostructures reflect nearly all visible light, challenging assumptions

https://phys.org/news/2025-06-glass-nanostructures-visible-photonics-assumptions.html
26•bookofjoe•3d ago•4 comments

A New Kind of Computer (April 2025)

https://lightmatter.co/blog/a-new-kind-of-computer/
40•gkolli•3d ago•17 comments

Parameterized types in C using the new tag compatibility rule

https://nullprogram.com/blog/2025/06/26/
130•ingve•21h ago•64 comments

Slightly better named character reference tokenization than Chrome, Safari, FF

https://www.ryanliptak.com/blog/better-named-character-reference-tokenization/
47•todsacerdoti•1d ago•8 comments

PJ5 TTL CPU

https://pj5cpu.wordpress.com/
81•doener•19h ago•1 comment

Project Vend: Can Claude run a small shop? (And why does that matter?)

https://www.anthropic.com/research/project-vend-1
199•gk1•10h ago•82 comments

Normalizing Flows Are Capable Generative Models

https://machinelearning.apple.com/research/normalizing-flows
84•danboarder•5h ago

Comments

layer8•4h ago
Earlier discussion: https://news.ycombinator.com/item?id=44358535
jc4p•3h ago
I've been trying to keep up with this field (image generation), so here are some quick notes I took:

Claude's Summary: "Normalizing flows aren't dead, they just needed modern techniques"

My Summary: "Transformers aren't just for text"

1. SOTA likelihood for a normalizing flow on ImageNet 64×64: the first flow ever below 3.2 bits per dimension (the overall record, 2.99, is held by a hybrid diffusion model)

2. Autoregressive (transformer-based) approach; right now diffusion is the most popular in this space (it's much faster, but a different approach)
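
Quick note on the metric, since "bits per dimension" trips people up: it's just the model's negative log-likelihood converted to bits and averaged over every dimension (pixel/channel value). A one-liner, assuming nll_nats is the mean per-example NLL in nats and D the number of dimensions:

  import math

  bpd = nll_nats / (D * math.log(2))  # nats -> bits, then average per dimension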

tl;dr of autoregressive vs. diffusion (there are also other approaches):

Autoregression: step-based; generate a little, then more, then more.

Diffusion: generate a lot of noise, then try to clean it up.

The diffusion approach that is the baseline for SOTA is Flow Matching from Meta: https://arxiv.org/abs/2210.02747. Lots of fun reading material if you throw both of these into an LLM and ask it to summarize the approaches!
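
If it helps to see the difference in control flow, here's a toy sketch of the two sampling loops in Python (the model interfaces are hypothetical, just for illustration):

  import torch

  def sample_autoregressive(model, num_tokens, start_token=0):
      # Autoregression: grow the sample step by step, each new piece
      # conditioned on everything generated so far.
      tokens = [start_token]
      for _ in range(num_tokens):
          logits = model(torch.tensor([tokens]))  # hypothetical next-token model
          next_tok = torch.distributions.Categorical(logits=logits[0, -1]).sample()
          tokens.append(int(next_tok))
      return tokens[1:]

  def sample_diffusion(model, shape, num_steps=50):
      # Diffusion: start from pure noise, then iteratively clean it up.
      x = torch.randn(shape)
      for t in reversed(range(num_steps)):
          eps = model(x, t)        # hypothetical noise-prediction model
          x = x - eps / num_steps  # crude update; real samplers follow a noise schedule
      return x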

godelski•2h ago
You have a few minor errors and I hope I can help out.

  > Diffusion: generate a lot of noise, then try to clean it up
You could say this about Flows too. Their history is shared with diffusion and goes back to the Whitening Transform. Flows work by a coordinate transform, so we have an isomorphism, whereas diffusion works through (for easier understanding) a hierarchical mixture of Gaussians, which is a lossy process (more confusing when we get into latent diffusion models, which are the primary type used). The goal of a Normalizing Flow is to turn your sampling distribution, which you don't have an explicit representation of, into an explicit probability distribution (typically a Normal/Gaussian). So in effect, there are a lot of similarities here. I'd highly suggest learning about Flows if you want to better understand Diffusion Models.
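
The change-of-variables machinery is small enough to write out, if that helps. Here's a toy affine flow layer in Python, the textbook construction rather than anything from the paper:

  import torch

  class AffineFlow(torch.nn.Module):
      # One invertible layer: z = x * exp(s) + t, elementwise.
      def __init__(self, dim):
          super().__init__()
          self.s = torch.nn.Parameter(torch.zeros(dim))  # log-scale
          self.t = torch.nn.Parameter(torch.zeros(dim))  # shift

      def forward(self, x):
          z = x * torch.exp(self.s) + self.t
          log_det = self.s.sum()  # log|det J| of this elementwise map
          return z, log_det

      def inverse(self, z):
          return (z - self.t) * torch.exp(-self.s)

  def log_prob(flow, x):
      # Change of variables: log p(x) = log N(f(x); 0, I) + log|det J_f(x)|
      z, log_det = flow(x)
      base = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
      return base + log_det

The inverse is exact, which is the isomorphism point above: nothing is lost mapping to the Gaussian and back, unlike the lossy diffusion view.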

  > The diffusion approach that is the baseline for SOTA is Flow Matching from Meta
To be clear, Flow Matching is a Normalizing Flow. Specifically, it is a Continuous and Conditional Normalizing Flow. If you want to get into the nitty-gritty, Ricky has a really good tutorial on the stuff [0].

[0] https://arxiv.org/abs/2412.06264
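
And for the curious, the conditional (linear-path) Flow Matching objective is only a few lines. A rough sketch, where velocity_net stands in for any network taking (x_t, t):

  import torch

  def flow_matching_loss(velocity_net, x1):
      # Pair each data sample x1 with Gaussian noise x0, pick a random time t,
      # and regress the network onto the straight-line velocity x1 - x0.
      x0 = torch.randn_like(x1)
      t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))  # broadcastable over the batch
      xt = (1 - t) * x0 + t * x1  # a point on the linear path
      target = x1 - x0            # constant velocity along that path
      return ((velocity_net(xt, t) - target) ** 2).mean()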

jc4p•1h ago
Thank you so much! I should've put that final sentence in my post!
godelski•1h ago
Happy to help! If you have any questions, just ask; this is my jam.
godelski•2h ago
As far as I'm aware, this is the largest Normalizing Flow that exists, and I think they undersold their work by not mentioning this...

Their ImageNet model (4_1024_8_8_0.05 [0]) is ~820M parameters, while their AFHQ model is ~472M. Prior to that there were DenseFlow [1] and MaCow [2], both under 200M parameters. For comparison, that makes DenseFlow and MaCow smaller than iDDPM [3] (270M params) and ADM [4] (553M for 256×256 unconditional). And now it isn't uncommon for modern diffusion models to have several billion parameters! [5] (From this we get some numbers on ImageNet-256, which allows a direct comparison, making TarFlow closer to MaskDiT/2 and much smaller than SimpleDiffusion and VDM++, both of which are in the billions. But note that this compares 128 vs. 256 resolution!)

Essentially, the argument here is that you can scale (Composable) Normalizing Flows just as well as diffusion models. There are a lot of extra benefits you get in the latent space too, but that's a much longer discussion. Honestly, the TarFlow method is simple, and there are probably a lot of improvements that can be made. But don't take that as a knock on this paper! I actually really appreciated it, and it delivered on what it set out to show. The real point is that no one had trained flows at this scale before, and that really needs to be highlighted.

The tl;dr: people have really just overlooked different model architectures.

[0] Used a third-party reproduction, so the numbers might differ, but their AFHQ-256 model matches at 472M params: https://github.com/encoreus/GS-Jacobi_for_TarFlow

[1] https://arxiv.org/abs/2106.04627

[2] https://arxiv.org/abs/1902.04208

[3] https://arxiv.org/abs/2102.09672

[4] https://arxiv.org/abs/2105.05233

[5] https://arxiv.org/abs/2401.11605

[Side note] Hey, if the TarFlow team is hiring, I'd love to work with you guys.