
dasayan05 t1_ivpmx7r wrote

There is no way to feasibly compute what you are asking for.

Diffusion models (in fact, any modern generative model) are defined on a continuous image space, i.e. a continuous vector of length 512x512. This space is not discrete, so there isn't even a notion of "distinct images". A tiny continuous change can lead to another plausible image, and there are (theoretically) infinitely many tiny changes you can apply to an image to produce one that looks the same but isn't the same point in image space.
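A minimal sketch of that point, assuming a 512x512 grayscale image and an arbitrary noise scale (both just placeholders, not anything specific to diffusion models):

```python
import numpy as np

# Treat an image as a point in a continuous 512x512-dimensional space.
rng = np.random.default_rng(0)
img = rng.random((512, 512)).astype(np.float32)

# Apply an imperceptibly small perturbation (scale chosen arbitrarily here).
perturbed = img + 1e-6 * rng.standard_normal((512, 512)).astype(np.float32)

# The two arrays are visually indistinguishable yet are distinct points,
# and in a continuous space there are infinitely many such perturbations.
print(np.allclose(img, perturbed, atol=1e-3))   # True: looks "the same"
print(np.array_equal(img, perturbed))           # False: not the same point
```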

The (theoretically) correct answer to your question is that there are infinitely many images you can sample from a given generative model.
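In practice you can see this by just varying the seed: every fresh latent gives a different sample, and the latent space is continuous. A rough sketch using the `diffusers` library (the model name and prompt are only examples):

```python
import torch
from diffusers import StableDiffusionPipeline  # assumes `diffusers` is installed

# Example checkpoint; any text-to-image diffusion model would do.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

for seed in range(3):
    # Each seed corresponds to a different starting latent, hence a different image.
    g = torch.Generator().manual_seed(seed)
    image = pipe("a photo of a cat", generator=g).images[0]
    image.save(f"cat_{seed}.png")
```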

17

tripple13 t1_ivqpwft wrote

This.

I guess that's what's remarkably fascinating about these models.

That said, in essence, you are putting a prior on the training set, so there should be some limit to the manifold from which samples are generated.

6