Ramys t1_j3pcxd1 wrote
Reply to comment by rodeowrong in [R] Diffusion language models by benanne
VAEs are running under the hood in Stable Diffusion. Instead of denoising a 512x512x3 image directly, the image is encoded by a VAE into a smaller latent space (64x64x4). The denoising steps happen in that latent space, and finally the VAE decoder maps the result back to pixel space. That's why it can run relatively quickly, even on machines that don't have tons of VRAM.
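To put some numbers on that, here's a quick back-of-the-envelope sketch (using the shapes from above; just illustrative arithmetic, not the actual SD implementation):

```python
import math

# Shapes from the comment above: pixel space vs. SD's VAE latent space.
pixel_shape = (512, 512, 3)   # height x width x RGB channels
latent_shape = (64, 64, 4)    # height x width x latent channels

pixel_elems = math.prod(pixel_shape)    # 786432 values per image
latent_elems = math.prod(latent_shape)  # 16384 values per latent

# Each denoising step touches ~48x fewer values in latent space.
ratio = pixel_elems / latent_elems
print(pixel_elems, latent_elems, ratio)  # 786432 16384 48.0
```

So every one of the (typically dozens of) denoising steps operates on a tensor about 48x smaller, and the VAE encode/decode only happens once at each end.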
So it's not necessarily the case that these techniques die out. We can learn from them and incorporate them into larger models.