frontpage.

Where the goblins came from

https://openai.com/index/where-the-goblins-came-from/
352•ilreb•3h ago•159 comments

Craig Venter has died

https://www.jcvi.org/media-center/j-craig-venter-genomics-pioneer-and-founder-jcvi-and-diploid-ge...
184•rdl•4h ago•33 comments

Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs

https://github.com/cauchy221/Alignment-Whack-a-Mole-Code
83•reconnecting•3h ago•45 comments

Zed 1.0

https://zed.dev/blog/zed-1-0
1714•salkahfi•15h ago•554 comments

Noctua releases official 3D CAD models for its cooling fans

https://www.noctua.at/en/3d-cad-models
83•embedding-shape•2d ago•16 comments

The Zig project's rationale for their firm anti-AI contribution policy

https://simonwillison.net/2026/Apr/30/zig-anti-ai/
119•lumpa•4h ago•41 comments

Functional programmers need to take a look at Zig

https://pure-systems.org/posts/2026-04-29-functional-programmers-need-to-take-a-look-at-zig.html
56•xngbuilds•3h ago•39 comments

Copy Fail

https://copy.fail/
814•unsnap_biceps•12h ago•315 comments

Biology is a Burrito: A text- and visual-based journey through a living cell

https://burrito.bio/essays/biology-is-a-burrito
55•the-mitr•2h ago•8 comments

Cursor Camp

https://neal.fun/cursor-camp/
820•bpierre•14h ago•135 comments

London to Calcutta by Bus (2022)

https://www.amusingplanet.com/2022/08/london-to-calcutta-by-bus.html
28•CGMthrowaway•1d ago•6 comments

FastCGI: 30 years old and still the better protocol for reverse proxies

https://www.agwa.name/blog/post/fastcgi_is_the_better_protocol_for_reverse_proxies
308•agwa•14h ago•72 comments

OpenTrafficMap

https://opentrafficmap.org/
203•moooo99•10h ago•46 comments

Mike: open-source legal AI

https://mikeoss.com/
62•noleary•5h ago•18 comments

Monad Tutorials Timeline

https://wiki.haskell.org/Monad_tutorials_timeline
8•brudgers•1h ago•1 comment

HERMES.md in commit messages causes requests to route to extra usage billing

https://github.com/anthropics/claude-code/issues/53262
1072•homebrewer•11h ago•457 comments

Joby kicks off NYC electric air taxi demos with historic JFK flight

https://www.flyingmag.com/joby-nyc-electric-air-taxi-jfk-airport/
39•Jblx2•5h ago•87 comments

Creating a Color Palette from an Image

https://amandahinton.com/blog/creating-a-color-palette-from-an-image
49•evakhoury•1d ago•6 comments

Consequences of passing too few register parameters to a C function

https://devblogs.microsoft.com/oldnewthing/20260427-00/?p=112271
46•aragonite•2d ago•21 comments

Why I still reach for Lisp and Scheme instead of Haskell

https://jointhefreeworld.org/blog/articles/lisps/why-i-still-reach-for-scheme-instead-of-haskell/...
201•jjba23•21h ago•95 comments

Who Is That Knocking at My (SSH) Door?

https://sheep.horse/2026/4/who_is_that_knocking_at_my_%28ssh%29_door.html
11•speckx•2d ago•0 comments

Laws of UX

https://lawsofux.com/
234•bobbiechen•13h ago•35 comments

Gooseworks (YC W23) Is Hiring a Founding Growth Engineer

https://www.ycombinator.com/companies/gooseworks/jobs/ztgY6bD-founding-growth-engineer
1•shivsak•8h ago

An open-source stethoscope that costs between $2.5 and $5 to produce

https://github.com/GliaX/Stethoscope
237•0x54MUR41•15h ago•101 comments

A grounded conceptual model for ownership types in Rust

https://cacm.acm.org/research-highlights/a-grounded-conceptual-model-for-ownership-types-in-rust/
25•tkhattra•4h ago•1 comment

Vera: a programming language designed for machines to write

https://github.com/aallan/vera
77•unignorant•8h ago•61 comments

We need a federation of forges

https://blog.tangled.org/federation/
550•icy•16h ago•341 comments

DRAM Crunch: Lessons for System Design

https://www.eetimes.com/what-the-dram-crunch-teaches-us-about-system-design/
50•giuliomagnifico•1d ago•3 comments

How to Build the Future: Demis Hassabis [video]

https://www.youtube.com/watch?v=JNyuX1zoOgU
105•sandslash•16h ago•51 comments

Ramp's Sheets AI Exfiltrates Financials

https://www.promptarmor.com/resources/ramps-sheets-ai-exfiltrates-financials
126•takira•12h ago•39 comments

Diffusion models explained simply

https://www.seangoedecke.com/diffusion-models-explained/
168•onnnon•11mo ago

Comments

user14159265•11mo ago
https://lilianweng.github.io/posts/2021-07-11-diffusion-mode...
Philpax•11mo ago
Notably, Lilian did not explain diffusion models simply. This is a fantastic resource that details how they actually work, but your casual reader is unlikely to develop any sort of understanding from this.
Y_Y•11mo ago
> your casual reader is unlikely to develop any sort of understanding [from this]

"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize." - Richard Feynman

CamperBob2•11mo ago
Didn't he also say that if you couldn't explain something to an 8-year-old, you didn't understand it yourself?
Y_Y•11mo ago
Fair point. The context of that quote was that he was asked by a journalist for a quick explanation over the phone when the physics Nobel for 1965 was announced.

He did go on to write a very readable little book (from a lecture series) on the subject which has photons wearing little watches and waiting for the hands to line up. I'd say a keen eight-year-old could get something from that.

https://ia600101.us.archive.org/17/items/richard-feynman-pdf...

kmitz•11mo ago
Thanks, I was looking for an article like this, with a focus on the differences between generative AI techniques. My guess is that since LLMs and image generation became mainstream at the same time, most people don't have the slightest idea they are based on fundamentally different technologies.
cubefox•11mo ago
That's a nice high-level explanation: short and easy to understand.
cubefox•11mo ago
It's nice that this contains a comparison between diffusion models that are used for image models, and the autoregressive models that are used for LLMs.

But recently (2024 NeurIPS paper of the year) there was a new paper on autoregressive image modelling that apparently outperforms diffusion models: https://arxiv.org/abs/2404.02905

The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.

In the past, autoregressive image models did not perform as well as diffusion models, which meant that most image models used diffusion. Now it seems autoregressive techniques have a strict advantage over diffusion models. Another advantage is that they can be integrated with autoregressive LLMs (multimodality), which is not possible with diffusion image models. In fact, the recent GPT-4o image generation is autoregressive according to OpenAI. I wonder whether diffusion models still have a future now.
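
In rough code, the generation loop looks something like the sketch below. This is illustrative only; `model`, its arguments, and the shapes are assumptions, not the paper's actual implementation:

    import torch

    # Illustrative "next-scale" autoregressive generation (VAR-style):
    # instead of predicting the next patch in raster order, predict the
    # whole token map at the next resolution, conditioned on all coarser
    # scales generated so far.
    def generate_next_scale(model, scales=(1, 2, 4, 8, 16)):
        maps_so_far = []
        for res in scales:
            # Hypothetical call: logits over the token vocabulary for
            # every position of the res x res map, predicted all at once.
            logits = model(maps_so_far, target_resolution=res)
            next_map = torch.distributions.Categorical(logits=logits).sample()
            maps_so_far.append(next_map)  # shape: (res, res)
        return maps_so_far[-1]  # finest token map, decoded to pixels elsewhere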

earthnail•11mo ago
From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

I'm not 100% convinced that diffusion models are dead. That paper fixes autoregression for 2D spaces by basically turning the generation problem from pixel-by-pixel to iterative upsampling, but if 2D was the problem (and 1D was not), why don't we have more autoregressive models in 1D spaces like audio?

famouswaffles•11mo ago
>From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

You could, because it's still autoregressive. It still generates patches left to right, top to bottom. It's just that we're not starting with patches at the target resolution.

cubefox•11mo ago
> From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited.

Which means autoregressive image models are even ahead of diffusion on multiple fronts, i.e. both in whatever GPT-4o is doing and in the method described in the VAR paper.

rudedogg•11mo ago
> From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

Going off my bad memory, but I think I remember a comment saying the line-by-line generation was just a visual effect.

famouswaffles•11mo ago
>The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.

It still predicts image patches, left to right and top to bottom. The main difference is that you start with patches at a low resolution.

porphyra•11mo ago
Meanwhile, if you want diffusion models explained with math for a graduate student, there's Tony Duan's Diffusion Models From Scratch [1].

[1] https://www.tonyduan.com/diffusion/index.html

bcherry•11mo ago
"The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."

- Michelangelo

jdthedisciple•11mo ago
Not to be that guy, but an article on diffusion models with only one image ... and even that is just noise?
ActorNightly•11mo ago
The thing to understand about any model architecture is that there isn't really anything special about one or the other - as long as the process is differentiable, ML can learn it.

You can build an image generator that basically renders each word on one line in an image, and then uses a transformer architecture to morph the image of the words into what the words are describing.

The only big difference is really efficiency, but we are just taking stabs in the dark at this point - there is work that Google is doing that is eventually going to result in the optimal model for a certain type of task.

noosphr•11mo ago
Without going into too much detail: the complexity space of tensor operations is for all practical purposes infinite. The general tensor which captures all interactions between all elements of an input of length N has on the order of N^N entries.

This is worse than exponential and means we have nothing but tricks to try and solve any problem that we see in reality.

As an example, solving MNIST and its variants at 28x28 pixels will be impossible until the 2100s, because we don't have enough memory to store the general tensor capturing the interactions between each group of pixels and every other group of pixels.
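
Putting a number on that, taking the N^N framing above at face value:

    import math

    # A 28x28 MNIST image has N = 784 pixels. A general interaction
    # tensor in the sense above would have about N^N entries.
    N = 28 * 28
    entries = N ** N
    print(f"~10^{math.floor(math.log10(entries))} entries")  # ~10^2268
    # For comparison, the observable universe holds ~10^80 atoms, which
    # is why practical architectures lean on structural shortcuts
    # (convolutions, attention) instead of modelling all interactions.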

joefourier•11mo ago
While true in a theoretical sense (an MLP of sufficient size can theoretically represent any differentiable function), in practice it’s often the case that it’s impossible for a certain architecture to learn a specific task no matter how much compute you throw at it. E.g. an LSTM will never capture long range dependencies that a transformer could trivially learn, due to gradients vanishing after a certain sequence length.
ActorNightly•11mo ago
You are right with respect to ordering of operations, where recurrent networks have a whole bunch of other computational complexity to them.

However, for example, a Transformer can be represented with just densely connected layers, albeit with a lot of zeros for weights.

g42gregory•11mo ago
One of the key intuitions: If you take a natural image and add random noise, you will get a different random noise image every time you do this. However, all of these (different!) random noise images will be lined up in the direction perpendicular to the natural image manifold.

So you will always know where to go to restore the original image: shortest distance to the natural image manifold.

How do all these random noise images end up perpendicular to the manifold? High-dimensional statistics, plus the fact that the natural image manifold has a much lower dimension than the overall space.
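
This is easy to check numerically with a toy stand-in for the manifold (a random k-dimensional subspace with k << D): the fraction of a Gaussian noise vector lying inside the subspace concentrates around sqrt(k/D), so in high dimension the noise is almost entirely perpendicular to it.

    import numpy as np

    # Toy check: Gaussian noise in D dims vs. a k-dim subspace (k << D)
    # standing in for the "natural image manifold".
    rng = np.random.default_rng(0)
    D, k = 10_000, 50
    basis, _ = np.linalg.qr(rng.normal(size=(D, k)))  # orthonormal basis

    noise = rng.normal(size=D)
    in_plane = basis @ (basis.T @ noise)   # projection onto the subspace
    frac = np.linalg.norm(in_plane) / np.linalg.norm(noise)
    print(f"fraction of noise inside the subspace: {frac:.3f}")  # ~0.07 = sqrt(k/D)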

yubblegum•11mo ago
TIL.

Generative Visual Manipulation on the Natural Image Manifold

https://arxiv.org/abs/1609.03552

For me, the most intriguing aspects of LLMs (and friends) are the embedding space and the geometry of the embedded manifolds. Curious if anyone has looked into comparative analysis of the geometry of the manifolds corresponding to distinct languages. Intuitively I see translations as a mapping from one language manifold to another, with expressions being paths on that manifold, which makes me wonder if there is a universal narrative language manifold that captures 'human expression semantics' in the same way as a "natural image manifold".

Ey7NFZ3P0nzAe•11mo ago
I think this is related: https://news.ycombinator.com/item?id=44054425
fisian•11mo ago
I found this course very helpful if you're interested in a bit of math (but all very well explained): https://diffusion.csail.mit.edu/

It is short, with good lecture notes and has hands on examples that are very approachable (with solutions available if you get stuck).

woolion•11mo ago
Discussed on hn: https://news.ycombinator.com/item?id=43238893

I found it to be the best resource to understand the material. That's certainly a good reference to delve deeper into the intuitions given by OP (it's about 5 hours of lectures, plus exercises).

IncreasePosts•11mo ago
Are there any diffusion models for text? I'd imagine they'd be very fast, if the whole result can be processed simultaneously, instead of outputting a linear series of tokens that each depend on the last
imbnwa•11mo ago
Need a text diffusion model to output a version of Eden!Eden!Eden!
woadwarrior01•11mo ago
Diffusion for text is a nascent field. There are a few pretrained models. Here's one [1]; AFAIK it's currently the largest open-weights text diffusion model.

[1]: https://ml-gsai.github.io/LLaDA-demo/
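
The decoding loop in these masked-diffusion language models runs roughly as follows. This is an illustrative simplification under assumed names (`model`, `MASK_ID`), not LLaDA's actual algorithm: start fully masked, predict every position in parallel, commit the most confident tokens, and re-mask the rest.

    import torch

    MASK_ID = 0  # hypothetical mask-token id (assumed reserved in the vocab)

    def masked_diffusion_decode(model, length, steps=8):
        tokens = torch.full((length,), MASK_ID)
        for step in range(steps):
            logits = model(tokens)                 # (length, vocab), placeholder
            probs, preds = logits.softmax(-1).max(-1)
            masked = tokens == MASK_ID
            target = length * (step + 1) // steps  # tokens committed by now
            n_new = target - int((~masked).sum())
            if n_new > 0:
                # Commit the most confident still-masked positions.
                conf = probs.masked_fill(~masked, -1.0)
                idx = conf.topk(n_new).indices
                tokens[idx] = preds[idx]
        return tokens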

intalentive•11mo ago
This explanation is intuitive: https://www.youtube.com/watch?v=zc5NTeJbk-k

My takeaway is that diffusion "samples all the tokens at once", incrementally, rather than getting locked in to a particular path, as in auto-regression, which can only look backward. The upside is global context, the downside is fixed-size output.

orbital-decay•11mo ago
That's not a good intuition to have. That backwards-looking pathfinding process is actually pretty similar in both types of models - it just works along a different coordinate, crude-to-detailed instead of start-to-end.
intalentive•11mo ago
Good point.
petermcneeley•11mo ago
This page is full of text. I am guessing the author (Sean Goedecke) is a language-based thinker.
JoeDaDude•11mo ago
Coincidentally, I was just watching this explanation earlier today:

How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile

https://www.youtube.com/watch?v=1CIpzeNxIhU

bicepjai•11mo ago
>>>CLASSIFIER-FREE GUIDANCE … During inference, you run once with a caption and once without, and blend the predictions (magnifying the difference between those two vectors). That makes sure the model is paying a lot of attention to the caption.

Why is this sentence true? “That makes sure the model is paying a lot of attention to the caption.”
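
Schematically, the blending being quoted looks like the sketch below (`model` and its API are placeholders). The difference between the two predictions is exactly the part of the output the caption is responsible for, so scaling that difference with w > 1 pushes the sample harder toward caption-relevant features.

    # Schematic classifier-free guidance step (placeholder `model`).
    def cfg_prediction(model, x_t, t, caption, w=7.5):
        eps_uncond = model(x_t, t, caption=None)   # denoising without caption
        eps_cond = model(x_t, t, caption=caption)  # denoising with caption
        # (eps_cond - eps_uncond) isolates the caption's contribution;
        # w > 1 magnifies it, as the quoted passage describes.
        return eps_uncond + w * (eps_cond - eps_uncond)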

noodletheworld•11mo ago
Mmm… how is a model with a fixed size, let’s say, 512x512 (ie. 64x64 latent or whatever), able to output coherent images at a larger size, let’s say, 1024x1024?

Not in a “kind of like this” kind of way: PyTorch vector pipelines can’t take arbitrarily sized inputs at runtime, right?

If your input has shape [x, y, z] you cannot pass [2x, 2y, 2z] into it.

Not… “it works but not very well”; like, it cannot execute the pipeline if the input dimensions aren’t exactly what they were when training.

Right? Isn’t that how it works?

So, is the image chunked into fixed patches and fed through in parts? Or something else?

For example, this toy implementation [1] resizes the input image to match the expected input, and always emits an output of a specific fixed size.

Which is what you would expect; but it also points to tools like Stable Diffusion working in a way that is distinctly different from what the trivial explanations tend to describe, right?

[1] - https://github.com/uygarkurt/UNet-PyTorch/blob/main/inferenc...
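
One piece of the answer can be checked directly: convolution weights don't depend on the input's spatial size, so the purely convolutional parts of a U-Net do run at resolutions they weren't trained on, while fixed-size linear layers (and learned position embeddings) are what pin a model to its training resolution. A minimal PyTorch check:

    import torch
    import torch.nn as nn

    # Convolutions slide the same weights over any H x W:
    conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
    print(conv(torch.randn(1, 3, 64, 64)).shape)    # [1, 8, 64, 64]
    print(conv(torch.randn(1, 3, 128, 128)).shape)  # [1, 8, 128, 128]

    # A fixed-size Linear layer is what breaks at a new resolution:
    flat = nn.Linear(3 * 64 * 64, 10)
    # flat(torch.randn(1, 3, 128, 128).flatten(1))  # shape-mismatch RuntimeError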

swyx•11mo ago
> That last point indicates an interesting capability that diffusion models have: you get a kind of built-in quality knob. If you want fast inference at the cost of quality, you can just run the model for less time and end up with more noise in the final output. If you want high quality and you’re happy to take your time getting there, you can keep running the model until it’s finished removing noise.

not quite right... anyone who has run models for >100 steps knows that you can go too far. what's the explanation of that?
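
For reference, the knob in the quoted passage is just the sampler's step count. A minimal Euler-style sketch (flow-matching flavour rather than the article's exact sampler; `velocity_model` is a placeholder) shows why fewer steps leave more error behind:

    import torch

    def euler_sample(velocity_model, shape, num_steps):
        # Integrate the denoising trajectory from t=1 (pure noise) to t=0.
        # num_steps is the quality knob: more steps, finer integration;
        # fewer steps, coarser updates and more residual error.
        x = torch.randn(shape)
        dt = 1.0 / num_steps
        for i in range(num_steps):
            t = 1.0 - i * dt
            x = x - velocity_model(x, t) * dt  # one Euler step toward t=0
        return x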