geekfolk t1_je3qyfr wrote

I don’t know about this model, but GANs are typically smaller than diffusion models in terms of parameter count. The image-structure thing probably has to do with the network architecture: GANs rarely use attention blocks, whereas diffusion models typically use a more hybrid architecture (CNN + attention).
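To make the architectural contrast concrete, here is a minimal PyTorch sketch (my own illustration, not taken from any specific model) of a plain conv block of the kind GAN generators stack, next to the conv + self-attention block common in diffusion U-Nets:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Plain convolutional block, typical of GAN generators."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(self.conv(x))

class HybridBlock(nn.Module):
    """Conv + self-attention block, of the kind found in diffusion U-Nets."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.GroupNorm(8, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        x = self.conv(x)
        b, c, h, w = x.shape
        # flatten spatial dims into a token sequence so attention mixes global context
        tokens = self.norm(x).flatten(2).transpose(1, 2)        # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)  # residual add

x = torch.randn(2, 64, 16, 16)
print(HybridBlock(64)(x).shape)  # torch.Size([2, 64, 16, 16])
```

The attention step is what lets every spatial location attend to every other one, which is a plausible reason diffusion models handle global image structure differently than a purely local conv stack.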


geekfolk t1_je3io3b wrote

Using pretrained models is kind of cheating; some GANs use this trick too (projected GANs). But as a standalone model it does not seem to work as well as SOTA GANs (judging by the numbers in the paper).
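For anyone unfamiliar with the projected-GAN trick: the discriminator is run on features from a frozen pretrained network instead of raw pixels. A rough sketch of the idea (my own simplification; the actual paper adds multi-scale heads and random feature projections):

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class ProjectedDiscriminator(nn.Module):
    """Discriminate in the feature space of a frozen pretrained backbone."""
    def __init__(self):
        super().__init__()
        backbone = tvm.efficientnet_b0(weights=tvm.EfficientNet_B0_Weights.DEFAULT)
        self.features = backbone.features.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)           # pretrained features stay frozen
        self.head = nn.Sequential(            # only this small head is trained
            nn.Conv2d(1280, 256, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 4))

    def forward(self, x):                     # x: (B, 3, 224, 224)
        with torch.no_grad():
            f = self.features(x)              # (B, 1280, 7, 7)
        return self.head(f)                   # patch-style real/fake logits
```

The "cheating" part is that the frozen backbone smuggles in knowledge from its own (often much larger) pretraining dataset, so the comparison with models trained from scratch isn't apples to apples.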


>Still, it's a lot easier than trying to solve any kind of minimax problem.

This was true for GANs in the early days; however, modern GANs have been proven not to suffer from mode collapse, and their training has been proven to converge.
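For reference, the minimax problem in question is the classic GAN objective from Goodfellow et al. (2014), with generator $G$ and discriminator $D$ trained adversarially:

$$
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
$$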

>It's actually reminiscent of GANs since it uses pre-trained networks

I assume you mean distilling a diffusion model, as in the paper. There have been some attempts to combine diffusion and GANs to get the best of both worlds, but afaik none involved distillation. I'm curious whether anyone has tried distilling diffusion models into GANs; a rough sketch of what that might look like is below.
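To be clear about what I mean, here is a purely hypothetical sketch (my own toy illustration, not a published method): use the frozen diffusion model as a teacher producing target samples, and train a one-step GAN generator to match them while a discriminator keeps outputs sharp. `teacher_sample` is a toy stand-in for the expensive multi-step sampler, and the networks are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, x_dim = 64, 32
generator = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
discriminator = nn.Sequential(nn.Linear(x_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

W = torch.randn(z_dim, x_dim)
def teacher_sample(z):            # toy stand-in for frozen multi-step diffusion sampling
    return torch.tanh(z @ W)

for step in range(100):
    z = torch.randn(16, z_dim)
    with torch.no_grad():
        target = teacher_sample(z)  # "ground truth" from the teacher
    fake = generator(z)             # one forward pass instead of many denoising steps

    # discriminator update: teacher outputs as real, student outputs as fake
    d_loss = (F.softplus(discriminator(fake.detach())) +
              F.softplus(-discriminator(target))).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # generator update: match the teacher pointwise and fool the discriminator
    g_loss = F.mse_loss(fake, target) + F.softplus(-discriminator(fake)).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The appeal would be getting diffusion-quality samples at one-step GAN inference cost; whether the adversarial term actually helps over a plain regression distillation is exactly the open question.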
