Comments

Education-Sea t1_iu4txlg wrote

> PFGMs constitute an exciting foundation for new avenues of research, especially given that they are 10-20 times faster than Diffusion Models on image generation tasks, with comparable performance.

Oh this is great.

36

ebolathrowawayy t1_iu5ayqu wrote

We're approaching real time with a 4090 (35 milliseconds with a 20x perf gain!). Exciting! Just hope it scales up to at least 512x512.

11

SleekEagle OP t1_iu5h5tt wrote

I'm not sure how the curse of dimensionality would affect PFGMs relative to Diffusion Models, but at the very least PFGMs could be dropped in as the base model in Imagen while diffusion models are kept for the super-resolution chain! More info on that here, or more info on Imagen here (or how to build your own Imagen here ;) ).

2

blueSGL t1_iu4z8up wrote

skimmed the paper and might have missed it, does it say if this is more or less VRAM efficient?

5

dasnihil t1_iu51u8k wrote

skimmed the paper and figured most of this math is beyond me, but it's exciting nonetheless.

12

SleekEagle OP t1_iu55u04 wrote

The deep dive section gives an overview of Green's functions! Don't be intimidated by the verbiage; the central ideas are not too complicated :)

If you have taken a multivariable calculus class, then most of it should make sense.
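
For anyone who wants the key formula without reading the full deep dive, the object in question is the free-space Green's function of the Poisson equation. This is the standard electrostatics result (my paraphrase, so check the post for the exact conventions), stated for N > 2 dimensions:

```latex
% Free-space Green's function of the Poisson equation, \nabla^2 G(x, y) = -\delta(x - y),
% in N > 2 dimensions, where S_{N-1} is the surface area of the unit (N-1)-sphere:
G(x, y) = \frac{1}{(N - 2)\, S_{N-1}\, \lVert x - y \rVert^{N-2}}

% Its negative gradient is the "electric field" that a unit charge at y exerts at x;
% PFGMs move samples along the field generated by the whole data distribution:
E(x) = -\nabla_x G(x, y) = \frac{x - y}{S_{N-1}\, \lVert x - y \rVert^{N}}
```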

13

SleekEagle OP t1_iu569xx wrote

I don't think the paper explicitly says anything about this, but I would expect them to be similar. If anything I would imagine they would require less memory, but not more. That having been said, if you're thinking of e.g. DALL-E 2 or Stable Diffusion, those models also have other parts that PFGMs don't (like text encoding networks), so it is completely fair that they are larger!

4

HydrousIt t1_iu6m0g8 wrote

A flow model is more VRAM-efficient and quicker at image generation, although sometimes at the cost of image quality that is inferior to GANs in terms of realism.

1

SleekEagle OP t1_iu56cs1 wrote

Note that PFGMs are not text-conditioned yet! There's still work to be done there :)

4

Llort_Ruetama t1_iu5wd9x wrote

Reading the title made me realize how insane it is, that we're able to pass electricity through sand in such a way that it generates art (AI Generated art)

7

cy13erpunk t1_iu60x0m wrote

ELI5 plz?

6

HydrousIt t1_iu6mdk9 wrote

A flow model is a type of generative AI. It's a method of unsupervised learning, meaning no labels are used for the prediction. A flow model generates data from an assumed distribution by way of a "flow", similar to the flow of water. Flow models are less VRAM-intensive and faster at generating images, even though GANs are generally more realistic, with more detail in the generated images. Anyone feel free to correct me or ask for more!
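
To make the "no labels" part concrete, here's a toy 1-D sketch of my own (not from the paper): because a flow is an invertible map applied to a simple base distribution, the exact likelihood of any data point follows from the change-of-variables formula, so the model can be trained by maximizing likelihood alone.

```python
import numpy as np

# Toy 1-D flow: an invertible map x = f(z) = a*z + b applied to a standard normal
# base distribution. The parameters a, b are made up here; in a real flow model
# they would belong to a stack of learned invertible layers.
a, b = 2.0, 1.0

def f_inv(x):                      # data space -> base space
    return (x - b) / a

def log_prob(x):
    # Change of variables: log p_X(x) = log p_Z(f_inv(x)) + log |d f_inv / dx|
    z = f_inv(x)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard normal log-density
    log_det = -np.log(abs(a))                        # |d f_inv / dx| = 1 / |a|
    return log_base + log_det

print(log_prob(np.array([0.0, 1.0, 3.0])))           # exact log-likelihoods, no labels needed
```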

8

SleekEagle OP t1_iuedbiw wrote

Just to add - PFGMs are best in class among flow models. They perform comparably to GANs on the datasets used in the paper, which is pretty exciting.

2

SleekEagle OP t1_iued3mr wrote

To generate data, you need to know the probability distribution of a dataset. This is in general unknown. The method called "normalizing flows" starts with a simple distribution that we do know exactly, and learns how to turn the simple distribution into the data distribution through a series of transformations. If we know these transformations, then we can generate data from the data distribution by sampling from the simple distribution and passing it through the transformations.
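
A minimal sketch of that recipe (my own illustration, not the authors' code): sample from a simple distribution we know exactly, then push the samples through a sequence of invertible transformations. In a trained normalizing flow each step would be a learned invertible layer; here they are hard-coded stand-ins.

```python
import numpy as np

# The "series of transformations": in a real normalizing flow these are learned,
# invertible neural network layers; here they are fixed invertible toy functions.
transforms = [
    lambda z: 2.0 * z + 1.0,                    # invertible affine step
    lambda z: np.sign(z) * np.abs(z) ** 0.8,    # invertible nonlinearity
]

def generate(n, dim=2):
    x = np.random.randn(n, dim)   # the simple distribution we know exactly
    for t in transforms:          # pass samples through the transformations
        x = t(x)
    return x                      # samples from the (toy) "data" distribution

print(generate(3))
```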

Normalizing flows are a general approach to generative AI - how to actually learn the transformations and what they look like depends on the particular method. With PFGMs, the authors find that the laws of physics define these transformations. If we start with a simple distribution, we can transform it into the data distribution by imagining the data points are charged particles and moving samples from the simple distribution along the field lines of the electric field they generate.
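
Here's a toy 2-D numerical sketch of that picture (my own illustration, not the authors' code, and it skips the augmented extra dimension and the learned network the real method uses):

```python
import numpy as np

# Pretend a few 2-D points are the "dataset" and treat them as positive charges.
# Their electric field points away from the data, so tracing the field lines
# backwards carries far-away samples toward the data distribution.
data = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.5]])    # made-up "dataset"

def field(x):
    """Electric field at points x (shape (n, 2)) from unit charges at the data points."""
    diff = x[:, None, :] - data[None, :, :]                 # (n, m, 2)
    dist = np.linalg.norm(diff, axis=-1, keepdims=True) + 1e-3   # offset avoids division by zero
    return (diff / dist ** 2).mean(axis=1)                  # ~ (x - y) / ||x - y||^N with N = 2

# Start samples on a big circle (standing in for the simple, known distribution)
# and walk a small fixed distance against the field direction at every step.
theta = np.random.uniform(0, 2 * np.pi, size=8)
x = 20.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)

for _ in range(500):
    e = field(x)
    x -= 0.05 * e / (np.linalg.norm(e, axis=1, keepdims=True) + 1e-12)  # backward along field lines

print(np.round(x, 2))   # each sample ends up near one of the "dataset" points
```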

2