rodeowrong t1_j3oaq7n wrote

So, is it worth exploring or not? I don't know if I should spend two months trying to understand diffusion models only to find they can never be better. VAE-based models had the same fate: I was studying them when transformers suddenly took over.

5

Ramys t1_j3pcxd1 wrote

VAEs are running under the hood in Stable Diffusion. Instead of denoising a 512x512x3 image directly, the image is encoded with a VAE into a smaller latent space (I think 64x64x4). The denoising steps happen in that latent space, and finally the VAE decodes the result back to pixel space. This is how it can run relatively quickly, and on machines that don't have tons of VRAM.
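
A minimal sketch of that round trip, using the Hugging Face diffusers `AutoencoderKL` (the checkpoint name and the 0.18215 scaling factor are the commonly published SD 1.x values; treat them as assumptions):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

image = torch.randn(1, 3, 512, 512)  # stand-in for a real RGB image in [-1, 1]

with torch.no_grad():
    # Encode: 1x3x512x512 pixels -> 1x4x64x64 latents (an 8x spatial downsample).
    latents = vae.encode(image).latent_dist.sample() * 0.18215

    # ...the diffusion U-Net would denoise `latents` here...

    # Decode: latents -> pixels.
    decoded = vae.decode(latents / 0.18215).sample

print(latents.shape)  # torch.Size([1, 4, 64, 64])
print(decoded.shape)  # torch.Size([1, 3, 512, 512])
```

Denoising a 64x64x4 tensor instead of a 512x512x3 one is where most of the compute savings come from.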

So it's not necessarily the case that these techniques die; we can learn from them and incorporate them into larger models.

5

[deleted] t1_j3opz0l wrote

I think it's worth looking at for sure. The math behind it isn't “that” complex, and the idea is pretty intuitive in my opinion. Take that from someone who took months to wrap their head around attention as a concept lol.
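
To give a flavor of why the idea is intuitive, here's a toy sketch of the DDPM-style forward (noising) process; the schedule values below are illustrative assumptions, not from any particular paper's config:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal fraction

def add_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = torch.randn_like(x0)
    a = alpha_bars[t].sqrt()
    s = (1.0 - alpha_bars[t]).sqrt()
    return a * x0 + s * eps, eps  # noisy image and the noise target

x0 = torch.randn(1, 3, 64, 64)   # stand-in for a real image
xt, eps = add_noise(x0, t=500)   # halfway through the schedule
```

The whole training objective is basically: show the model `xt` and `t`, and ask it to predict `eps`.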

3

thecodethinker t1_j3pichs wrote

Attention is still pretty confusing for me. I find diffusion much more intuitive fwiw.
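
Funny enough, the code for attention is short even if the intuition isn't. A minimal sketch of scaled dot-product attention (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q, k, v: (batch, seq_len, d) tensors.
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5  # pairwise similarity, scaled
    weights = F.softmax(scores, dim=-1)        # each query's mixing weights
    return weights @ v                         # weighted average of values

q = k = v = torch.randn(1, 8, 16)
print(attention(q, k, v).shape)  # torch.Size([1, 8, 16])
```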

2

gamerx88 t1_j3qft42 wrote

What do you mean by “transformers took over”? In what area or sense? Do you mean in popularity?

2