
Agreeable-Run-9152 t1_j33wlnt wrote

Let's consider a dataset consisting of only a single image x, and assume the optimization process is known and deterministic.

Then, given the weights of the diffusion model and the optimization procedure P(theta_0, t, x), which maps the initial weights theta_0 to theta_t after t steps of training on the image x, the problem becomes:

Find x such that |theta_t - P(theta_0, t, x)| = 0 for all times t.

I would IMAGINE (I am not sure) that, given enough times t, we get a unique solution x.

This argument should even hold for datasets consisting of multiple images.
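
To make the setup concrete, here is a minimal sketch of the recovery problem in PyTorch. Everything here is illustrative: a toy quadratic training map stands in for actual diffusion training, and the learning rate, checkpoint times, and variable names are all assumptions, not anything from a real pipeline.

```python
import torch

def P(theta0, t, x, lr=0.1):
    """Deterministic training map: run t gradient steps on a toy loss
    (here 0.5 * ||theta - x||^2, so grad = theta - x) from theta0."""
    theta = theta0.clone()
    for _ in range(t):
        theta = theta - lr * (theta - x)
    return theta

# Observed weights theta_t at a few checkpoints, produced by training
# on an unknown image x_true (flattened to a vector for simplicity).
x_true = torch.tensor([0.2, 0.8, 0.5])
theta0 = torch.zeros(3)
checkpoints = [1, 5, 20]
observed = {t: P(theta0, t, x_true) for t in checkpoints}

# Recover x by minimizing sum_t ||theta_t - P(theta_0, t, x)||^2,
# differentiating through the unrolled training loop.
x_hat = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = sum((observed[t] - P(theta0, t, x_hat)).pow(2).sum()
               for t in checkpoints)
    loss.backward()
    opt.step()

print(x_hat.detach())  # approaches x_true as the loss goes to 0
```

In this toy case the map is linear in x, so a single checkpoint already pins x down uniquely; the open question above is whether enough checkpoints do the same for a real, highly non-convex training procedure.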

2

Agreeable-Run-9152 t1_j33xcmm wrote

Note that this argument really isn't about diffusion or generative models but about optimization. I know my fair share of generative modelling, but this idea is much more general and might have popped up somewhere else in optimization/inverse problems?

1

fakesoicansayshit t1_j3db14h wrote

If I train the model on a set of 1x1-pixel images that only have two states, black or white, with two labels, 'black' and 'white', then shouldn't prompting 'black' generate a 1x1 black image 100% of the time?
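
A minimal sketch of that intuition, with the conditional model collapsed to a plain regression from label to pixel. The network, learning rate, and loop are all illustrative assumptions; a real conditional diffusion model would instead condition a denoiser on the label, but with a fully deterministic dataset the learned conditional collapses the same way.

```python
import torch
import torch.nn as nn

# Toy conditional "model": map a one-hot label to a single pixel value.
# 'black' -> 0.0, 'white' -> 1.0; with only two deterministic
# (label, image) pairs, the learned mapping is pure memorization.
labels = torch.tensor([[1.0, 0.0],   # one-hot for 'black'
                       [0.0, 1.0]])  # one-hot for 'white'
pixels = torch.tensor([[0.0], [1.0]])

model = nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.5)
for _ in range(1000):
    opt.zero_grad()
    loss = ((model(labels) - pixels) ** 2).mean()
    loss.backward()
    opt.step()

# Prompting 'black' now yields (approximately) a black pixel every
# time, since nothing in the data leaves room for ambiguity.
print(model(torch.tensor([[1.0, 0.0]])).item())  # ~0.0
```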

1

Agreeable-Run-9152 t1_j3dbcyl wrote

Yeah, that's true. My comment relates to unconditional diffusion models à la Song, not Stable Diffusion. The argument might be adapted for conditional generation, though.

2