frontpage.

Earth is warming faster. Scientists are closing in on why

https://www.economist.com/science-and-technology/2024/12/16/earth-is-warming-faster-scientists-ar...
87•andsoitis•1h ago•59 comments

ASCII characters are not pixels: a deep dive into ASCII rendering

https://alexharri.com/blog/ascii-rendering
524•alexharri•7h ago•65 comments

We Put Claude Code in Rollercoaster Tycoon

https://labs.ramp.com/rct
182•iamwil•5d ago•95 comments

Why There's No Single Best Way to Store Information

https://www.quantamagazine.org/why-theres-no-single-best-way-to-store-information-20260116/
30•7777777phil•2h ago•9 comments

Show HN: What if your menu bar was a keyboard-controlled command center?

https://extrabar.app/
27•pugdogdev•1h ago•13 comments

Counterfactual evaluation for recommendation systems

https://eugeneyan.com/writing/counterfactual-evaluation/
27•kurinikku•13h ago•0 comments

2025 was the third hottest year on record

https://www.economist.com/science-and-technology/2026/01/14/2025-was-the-third-hottest-year-on-re...
108•andsoitis•1h ago•72 comments

M8SBC-486 (Homebrew 486 computer)

https://maniek86.xyz/projects/m8sbc_486.php
21•rasz•6d ago•3 comments

The 600-year-old origins of the word 'hello'

https://www.bbc.com/culture/article/20260113-hello-hiya-aloha-what-our-greetings-reveal
75•1659447091•7h ago•43 comments

The Dilbert Afterlife

https://www.astralcodexten.com/p/the-dilbert-afterlife
324•rendall•1d ago•212 comments

East Germany balloon escape

https://en.wikipedia.org/wiki/East_Germany_balloon_escape
631•robertvc•1d ago•267 comments

ClickHouse acquires Langfuse

https://langfuse.com/blog/joining-clickhouse
164•tin7in•9h ago•71 comments

Map To Poster – Create Art of your favourite city

https://github.com/originalankur/maptoposter
157•originalankur•8h ago•49 comments

Show HN: Streaming gigabyte medical images from S3 without downloading them

https://github.com/PABannier/WSIStreamer
108•el_pa_b•10h ago•40 comments

16 Best Practices for Reducing Dependabot Noise

https://nesbitt.io/2026/01/10/16-best-practices-for-reducing-dependabot-noise.html
16•zdw•5d ago•11 comments

Cloudflare acquires Astro

https://astro.build/blog/joining-cloudflare/
909•todotask2•1d ago•379 comments

The Resonant Computing Manifesto

https://resonantcomputing.org/
26•sinak•2h ago•7 comments

US electricity demand surged in 2025 – solar handled 61% of it

https://electrek.co/2026/01/16/us-electricity-demand-surged-in-2025-solar-handled-61-percent/
275•doener•8h ago•251 comments

The 'untouchable hacker god' behind Finland's biggest crime

https://www.theguardian.com/technology/2026/jan/17/vastaamo-hack-finland-therapy-notes
116•c420•11h ago•115 comments

The Olivetti Company – By Bradford Morgan White

https://www.abortretry.fail/p/the-olivetti-company
8•rbanffy•6d ago•3 comments

Cursor's latest “browser experiment” implied success without evidence

https://embedding-shapes.github.io/cursor-implied-success-without-evidence/
661•embedding-shape•1d ago•288 comments

High-Level Is the Goal

https://bvisness.me/high-level/
215•tobr•2d ago•104 comments

6-Day and IP Address Certificates Are Generally Available

https://letsencrypt.org/2026/01/15/6day-and-ip-general-availability
462•jaas•1d ago•256 comments

Italy investigates Activision Blizzard for pushing in-game purchases

https://techcrunch.com/2026/01/16/italy-investigates-activision-blizzard-for-pushing-in-game-purc...
80•7777777phil•5h ago•31 comments

FLUX.2 [Klein]: Towards Interactive Visual Intelligence

https://bfl.ai/blog/flux2-klein-towards-interactive-visual-intelligence
199•GaggiX•19h ago•55 comments

PCs refuse to shut down after Microsoft patch

https://www.theregister.com/2026/01/16/patch_tuesday_secure_launch_bug_no_shutdown/
176•smurda•8h ago•196 comments

Show HN: I built a tool to assist AI agents to know when a PR is good to go

https://dsifry.github.io/goodtogo/
11•dsifry•9h ago•9 comments

LLM Structured Outputs Handbook

https://nanonets.com/cookbooks/structured-llm-outputs
333•vitaelabitur•2d ago•58 comments

For me, Hacker News is probably the best community on the internet

16•DenisDolya•1h ago•10 comments

Drone Hacking Part 1: Dumping Firmware and Bruteforcing ECC

https://neodyme.io/en/blog/drone_hacking_part_1/
118•tripdout•16h ago•23 comments

Diffusion models explained simply

https://www.seangoedecke.com/diffusion-models-explained/
168•onnnon•8mo ago

Comments

user14159265•8mo ago
https://lilianweng.github.io/posts/2021-07-11-diffusion-mode...
Philpax•8mo ago
Notably, Lilian did not explain diffusion models simply. This is a fantastic resource that details how they actually work, but your casual reader is unlikely to develop any sort of understanding from this.
Y_Y•8mo ago
> your casual reader is unlikely to develop any sort of understanding [from this]

"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize." - Richard Feynman

CamperBob2•8mo ago
Didn't he also say that if you couldn't explain something to an 8-year-old, you didn't understand it yourself?
Y_Y•8mo ago
Fair point. The context of that quote was that he was asked by a journalist for a quick explanation over the phone when the physics Nobel for 1965 was announced.

He did go on to write a very readable little book (from a lecture series) on the subject which has photons wearing little watches and waiting for the hands to line up. I'd say a keen eight-year-old could get something from that.

https://ia600101.us.archive.org/17/items/richard-feynman-pdf...

kmitz•8mo ago
Thanks, I was looking for an article like this, with a focus on the differences between generative AI techniques. My guess is that since LLMs and image generation became mainstream at the same time, most people don't have the slightest idea they are based on fundamentally different technologies.
cubefox•8mo ago
That's a nice high-level explanation: short and easy to understand.
cubefox•8mo ago
It's nice that this contains a comparison between diffusion models that are used for image models, and the autoregressive models that are used for LLMs.

But recently (the 2024 NeurIPS paper of the year) there was a new paper on autoregressive image modelling that apparently outperforms diffusion models: https://arxiv.org/abs/2404.02905

The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.

In the past, autoregressive image models did not perform as well as diffusion models, which meant that most image models used diffusion. Now it seems autoregressive techniques have a strict advantage over diffusion models. Another advantage is that they can be integrated with autoregressive LLMs (multimodality), which is not possible with diffusion image models. In fact, the recent GPT-4o image generation is autoregressive according to OpenAI. I wonder whether diffusion models still have a future now.

earthnail•8mo ago
From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

I'm not 100% convinced that diffusion models are dead. That paper fixes autoregression for 2D spaces by basically turning the generation problem from pixel-by-pixel to iterative upsampling, but if 2D was the problem (and 1D was not), why don't we have more autoregressive models in 1D spaces like audio?

famouswaffles•8mo ago
>From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

You could, because it's still autoregressive. It still generates patches left to right, top to bottom. It's just that we're not starting with patches at the target resolution.

cubefox•8mo ago
> From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited.

Which means autoregressive image models are even ahead of diffusion on multiple fronts, i.e. both in whatever GPT-4o is doing and in the method described in the VAR paper.

rudedogg•7mo ago
> From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

Going off my bad memory, but I think I remember a comment saying the line-by-line generation was just a visual effect.

famouswaffles•8mo ago
>The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.

It still predicts image patches, left to right and top to bottom. The main difference is that you start with patches at a low resolution.
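
A loose sketch of that next-scale loop, with a hypothetical model, made-up scale schedule, and the real VAR details (discrete token maps, a VQ decoder, a transformer) elided:

    import torch
    import torch.nn.functional as F

    # Next-scale autoregression, loosely after VAR: predict whole maps at
    # increasing resolutions, each conditioned on the coarser maps so far.
    def generate(model, scales=(1, 2, 4, 8, 16), channels=4):
        maps = []
        for s in scales:
            if maps:
                # Context: everything generated so far, upsampled to s x s.
                context = sum(F.interpolate(m, size=(s, s), mode="nearest")
                              for m in maps)
            else:
                context = torch.zeros(1, channels, s, s)
            maps.append(model(context))  # predict the next, finer map
        return maps[-1]                  # finest map would go to a decoder

    toy = lambda ctx: ctx + torch.randn_like(ctx)  # stand-in "model"
    print(generate(toy).shape)  # torch.Size([1, 4, 16, 16])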

porphyra•8mo ago
Meanwhile, if you want diffusion models explained with math for a graduate student, there's Tony Duan's Diffusion Models From Scratch.

[1] https://www.tonyduan.com/diffusion/index.html

bcherry•8mo ago
"The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."

- Michelangelo

jdthedisciple•8mo ago
Not to be that guy but an article on diffusion models with only one image ... and that too just noise?
ActorNightly•8mo ago
The thing to understand about any model architecture is that there isn't really anything special about one or the other: as long as the process is differentiable, ML can learn it.

You can build an image generator that basically renders each word on one line in an image, and then uses a transformer architecture to morph the image of the words into what the words are describing.

The only big difference is really efficiency, but we are just taking stabs in the dark at this point; there is work Google is doing that will eventually result in the optimal model for a certain type of task.

noosphr•8mo ago
Without going into too much detail: the complexity space of tensor operations is for all practical purposes infinite. The general tensor which captures all interactions between all elements of an input of length N is N^N.

This is worse than exponential and means we have nothing but tricks to try and solve any problem that we see in reality.

As an example, solving MNIST and its variants of 28x28 pixels will be impossible until the 2100s, because we don't have enough memory to store the general tensor which captures the interactions between every group of pixels and every other group of pixels.

joefourier•8mo ago
While true in a theoretical sense (an MLP of sufficient size can theoretically represent any differentiable function), in practice it’s often the case that it’s impossible for a certain architecture to learn a specific task no matter how much compute you throw at it. E.g. an LSTM will never capture long range dependencies that a transformer could trivially learn, due to gradients vanishing after a certain sequence length.
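
A quick, hypothetical demo of that vanishing-gradient point: with a randomly initialized (untrained) LSTM, the gradient of the last output with respect to the first input typically shrinks fast as the sequence grows. Not a proof, just the rough shape of the problem:

    import torch
    import torch.nn as nn

    # Gradient of the last output w.r.t. the first token, for growing T.
    torch.manual_seed(0)
    lstm = nn.LSTM(input_size=8, hidden_size=8, batch_first=True)
    for T in (10, 100, 1000):
        x = torch.randn(1, T, 8, requires_grad=True)
        out, _ = lstm(x)
        out[0, -1].sum().backward()
        print(T, x.grad[0, 0].abs().max().item())  # decays toward zero
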
ActorNightly•8mo ago
You are right with respect to the ordering of operations; recurrent networks have a whole bunch of other computational complexity to them.

However, a Transformer, for example, can be represented with just densely connected layers, albeit with a lot of zeros for weights.

g42gregory•8mo ago
One of the key intuitions: if you take a natural image and add random noise, you will get a different random noise image every time you do this. However, all of these (different!) random noise images will be lined up in the direction perpendicular to the natural image manifold.

So you will always know where to go to restore the original image: shortest distance to the natural image manifold.

How do all these random noise images end up perpendicular to the manifold? High-dimensional statistics, and the fact that the natural image manifold has a much lower dimension than the overall space.
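
The "perpendicular" part can be checked numerically with a toy stand-in: treat the manifold as a random low-dimensional subspace of a high-dimensional space (the real image manifold is curved, but the norm argument is the same) and project Gaussian noise onto it:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 10_000, 50  # ambient dimension, "manifold" dimension

    # Stand-in for the natural image manifold: a random k-dim subspace.
    basis, _ = np.linalg.qr(rng.standard_normal((n, k)))

    noise = rng.standard_normal(n)
    on_manifold = basis @ (basis.T @ noise)  # projection onto the subspace
    print(np.linalg.norm(on_manifold) / np.linalg.norm(noise))
    # ~sqrt(k/n) ≈ 0.07: almost all of the noise is perpendicular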

yubblegum•8mo ago
TIL.

Generative Visual Manipulation on the Natural Image Manifold

https://arxiv.org/abs/1609.03552

For me, the most intriguing aspect of LLMs (and friends) is the embedding space and the geometry of the embedded manifolds. Curious if anyone has looked into comparative analysis of the geometry of the manifolds corresponding to distinct languages. Intuitively I see translations as a mapping from one language manifold to another, with expressions being paths on that manifold, which makes me wonder if there is a universal narrative language manifold that captures 'human expression semantics' in the same way as a "natural image manifold".

Ey7NFZ3P0nzAe•7mo ago
I think this is related: https://news.ycombinator.com/item?id=44054425
fisian•8mo ago
I found this course very helpful if you're interested in a bit of math (but all very well explained): https://diffusion.csail.mit.edu/

It is short, with good lecture notes, and has hands-on examples that are very approachable (with solutions available if you get stuck).

woolion•8mo ago
Discussed on hn: https://news.ycombinator.com/item?id=43238893

I found it to be the best resource to understand the material. That's certainly a good reference to delve deeper into the intuitions given by OP (it's about 5 hours of lectures, plus exercises).

IncreasePosts•8mo ago
Are there any diffusion models for text? I'd imagine they'd be very fast, if the whole result can be processed simultaneously, instead of outputting a linear series of tokens that each depend on the last
imbnwa•8mo ago
Need a text diffusion model to output a version of Eden!Eden!Eden!
woadwarrior01•8mo ago
Diffusion for text is a nascent field. There are a few pretrained models. Here's one[1], AFAIK it's currently the largest open weights text diffusion model.

[1]: https://ml-gsai.github.io/LLaDA-demo/

intalentive•8mo ago
This explanation is intuitive: https://www.youtube.com/watch?v=zc5NTeJbk-k

My takeaway is that diffusion "samples all the tokens at once", incrementally, rather than getting locked in to a particular path, as in auto-regression, which can only look backward. The upside is global context, the downside is fixed-size output.

orbital-decay•8mo ago
That's not a good intuition to have. That backwards-looking pathfinding process is actually pretty similar in both types of models; it just works along a different coordinate, crude-to-detailed instead of start-to-end.
intalentive•7mo ago
Good point.
petermcneeley•8mo ago
This page is full of text. I am guessing the author (Sean Goedecke) is a language-based thinker.
JoeDaDude•8mo ago
Coincidentally, I was just watching this explanation earlier today:

How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile

https://www.youtube.com/watch?v=1CIpzeNxIhU

bicepjai•8mo ago
>>>CLASSIFIER-FREE GUIDANCE … During inference, you run once with a caption and once without, and blend the predictions (magnifying the difference between those two vectors). That makes sure the model is paying a lot of attention to the caption.

Why is this sentence true? “That makes sure the model is paying a lot of attention to the caption.”
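
For what it's worth, the blend the article describes is the standard classifier-free guidance combination. Written out with a hypothetical denoiser (names made up, formula standard), a guidance scale above 1 extrapolates past the conditional prediction along whatever direction the caption contributed, which is the sense in which the model is forced to "pay attention" to it:

    import torch

    def cfg_predict(model, x_t, t, caption_emb, null_emb, scale=7.5):
        # Run the denoiser twice: once with the caption, once without.
        eps_cond = model(x_t, t, caption_emb)
        eps_uncond = model(x_t, t, null_emb)
        # scale = 1 recovers the conditional prediction; scale > 1
        # magnifies the difference the caption made.
        return eps_uncond + scale * (eps_cond - eps_uncond)

    # Toy stand-in denoiser so the sketch runs end to end.
    toy = lambda x, t, c: 0.1 * x + c.mean()
    x_t = torch.randn(1, 4, 8, 8)
    print(cfg_predict(toy, x_t, 10, torch.ones(8), torch.zeros(8)).shape)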

noodletheworld•8mo ago
Mmm… how is a model with a fixed size, let’s say, 512x512 (i.e. 64x64 latent or whatever), able to output coherent images at a larger size, let’s say, 1024x1024?

Not in a “kind of like this” kind of way: PyTorch vector pipelines can’t take arbitrarily sized inputs at runtime, right?

If your input has shape [x, y, z], you cannot pass [2x, 2y, 2z] into it.

Not “it works but not very well”; like, it cannot execute the pipeline if the input dimensions aren’t exactly what they were during training.

Right? Isn’t that how it works?

So, is the image chunked into fixed patches and fed through in parts? Or something else?

For example, this toy implementation [1] resizes the input image to match the expected input, and always emits an output of a specific fixed size.

Which is what you would expect; but it also points to tools like Stable Diffusion working in a way that is distinctly different from what the trivial explanations describe?

[1] - https://github.com/uygarkurt/UNet-PyTorch/blob/main/inferenc...
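
For the convolutional parts, at least, the constraint is looser than that: conv weights fix channel counts, not spatial size, so the same module does execute on different resolutions at runtime. The hard fixed-size constraints come from flatten/linear heads and learned positional embeddings. A minimal check (toy layers, not an actual diffusion U-Net):

    import torch
    import torch.nn as nn

    # A fully convolutional stack has no spatial size baked into it.
    net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 3, kernel_size=3, padding=1),
    )
    print(net(torch.randn(1, 3, 64, 64)).shape)    # [1, 3, 64, 64]
    print(net(torch.randn(1, 3, 128, 128)).shape)  # [1, 3, 128, 128]

That is one reason largely convolutional diffusion U-Nets can be run away from their training resolution (usually with degraded coherence), while pipelines that resize everything to a fixed input, like the toy implementation linked above, cannot.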

swyx•7mo ago
> That last point indicates an interesting capability that diffusion models have: you get a kind of built-in quality knob. If you want fast inference at the cost of quality, you can just run the model for less time and end up with more noise in the final output. If you want high quality and you’re happy to take your time getting there, you can keep running the model until it’s finished removing noise.

not quite right... anyone who has run models for >100 steps knows that you can go too far. whts the explanation of that?