
Quaxi_ t1_is7gnsf wrote

No, definitely: GANs can still fail, and they are much less stable than diffusion models. But GANs have enjoyed huge popularity despite that, and research has found ways to mitigate the instability (see the sketch below).
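For concreteness, here is a minimal PyTorch sketch of two common stabilization tricks, spectral normalization and the R1 gradient penalty. The `discriminator` and image tensors are placeholders for illustration, not code from any specific paper:

```python
import torch
import torch.nn as nn

# 1. Spectral normalization: constrains the Lipschitz constant of a
#    discriminator layer by normalizing its weight's largest singular value.
disc_layer = nn.utils.spectral_norm(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1)
)

# 2. R1 gradient penalty: penalizes the discriminator's gradient norm
#    on real samples, which helps training converge.
def r1_penalty(discriminator, real_images):
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True
    )
    return grads.pow(2).flatten(1).sum(1).mean()
```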

I just don't think it's the main reason diffusion models are gaining traction. If it were, we probably would have seen a lot more of variational autoencoders. My work is not at BigGAN or DALL-E 2 scale, though, so I might indeed be missing some scaling aspect of this. :)

2

Atom_101 t1_is7ldte wrote

I think VAEs are weak not because of scaling issues, but because of an overly strong inductive bias: the latent distribution has to be a Gaussian with a diagonal covariance matrix. This problem is reduced by techniques like vector quantization; DALL-E 1 actually used this (a discrete VAE), before DMs took off. But even then, I believe they are too underpowered. Another family of generative models is normalizing flows, which also require heavy restrictions on the architecture: every layer must be invertible with a tractable Jacobian determinant. GANs and DMs are much less restricted and can model arbitrary data distributions.
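To make the contrast concrete, here is a minimal PyTorch sketch of the two latent bottlenecks, with hypothetical shapes; the VQ part is a simplified version of the VQ-VAE idea, not DALL-E 1's actual code:

```python
import torch

def gaussian_latent(mu, logvar):
    # Standard VAE reparameterization: z ~ N(mu, diag(exp(logvar))).
    # The diagonal covariance is exactly the restrictive assumption above.
    eps = torch.randn_like(mu)
    return mu + eps * torch.exp(0.5 * logvar)

def vq_latent(z_e, codebook):
    # Vector quantization: replace each encoder output with its nearest
    # codebook vector, sidestepping the Gaussian prior entirely.
    # z_e: (batch, dim), codebook: (num_codes, dim)
    dists = torch.cdist(z_e, codebook)   # pairwise distances, (batch, num_codes)
    idx = dists.argmin(dim=1)            # nearest code per sample
    z_q = codebook[idx]
    # Straight-through estimator so gradients still reach the encoder.
    return z_e + (z_q - z_e).detach()
```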

Can you point to an example where GANs perform visibly worse? We can't really compare quality between SOTA GANs and SOTA DMs; the difference in scale is just too large. There was a tweet thread recently, regarding Google Imagen IIRC, showing that increasing model size drastically improves image quality for text-to-image DMs: going from 1B to 10B parameters gave visible improvements. But if you compare photorealistic faces generated by Stable Diffusion and, say, StyleGAN3, I am not sure you would be able to tell the difference.
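For what it's worth, generating such faces with Stable Diffusion takes a few lines with the Hugging Face diffusers library; the checkpoint id and prompt below are just illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; any Stable Diffusion weights work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photorealistic portrait photo of a person, studio lighting").images[0]
image.save("sd_face.png")
```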

2