Submitted by grid_world t3_y45kyh in deeplearning
For a VAE architecture on a dataset such as CIFAR-10, if the latent space is intentionally made large, say 1000-d, I am assuming that the VAE will simply not use the extra latent dimensions it does not need. Since the KL term penalizes any deviation from the prior, the unneeded dimensions learn nothing meaningful and their posteriors stay close to the standard multivariate Gaussian. This would serve as a signal that such dimensions can safely be removed without significantly impacting the model's performance.
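Concretely, the signal I have in mind is the per-dimension KL term of the ELBO: a dimension collapsed to the prior contributes roughly zero. A minimal sketch, assuming a trained encoder that returns `mu` and `logvar` tensors (the `encoder`/`loader` names below are placeholders):

```python
import torch

def per_dim_kl(mu, logvar):
    """Per-dimension KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    averaged over the batch. Collapsed (unused) dimensions give values near 0."""
    # KL per sample and dimension: 0.5 * (mu^2 + sigma^2 - log(sigma^2) - 1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
    return kl.mean(dim=0)  # shape: (latent_dim,)

# Hypothetical usage with a trained encoder and a CIFAR-10 DataLoader:
# kls = torch.stack([per_dim_kl(*encoder(x)) for x, _ in loader]).mean(dim=0)
# unused = (kls < 0.01).nonzero()  # candidate dimensions to prune
```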
I have implemented quite a few of these VAEs, which can be referred to here.
Am I right about this? Is there any research paper substantiating my hand-wavy hypothesis?
The_Sodomeister t1_iscenee wrote
*If* your hypothesis is true (and I don't have enough direct experience with VAEs to say for certain), then how would you distinguish the dimensions that are outputting approximately Gaussian noise from the dimensions that are outputting meaningful signal? Who's to say that the meaningful signal doesn't also appear approximately Gaussian? Or at least sufficiently Gaussian that it isn't easily distinguishable from the others.
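To illustrate: a per-dimension normality test on latent codes collected over a test set won't necessarily separate the two cases, since an informative dimension whose aggregate distribution happens to be close to N(0, 1) passes just as easily as a dead one. A sketch, assuming `z` is a hypothetical matrix of latent means from your encoder:

```python
import numpy as np
from scipy import stats

def gaussian_like_dims(z, alpha=0.05):
    """Flag dimensions whose marginal looks Gaussian (D'Agostino-Pearson test).
    z: (n_samples, latent_dim) array of latent codes from a test set.
    Caveat: both unused dimensions AND informative dimensions with a roughly
    Gaussian aggregate posterior can end up flagged here."""
    pvals = np.array([stats.normaltest(z[:, j]).pvalue for j in range(z.shape[1])])
    return pvals > alpha  # True = "not distinguishable from Gaussian"
```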
While I wouldn't go so far as to say that your hypothesis "doesn't happen", I also know from personal experience with other networks that NN models tend to naturally over-parameterize if you let them. Regularization methods don't usually prevent the model from utilizing extra dimensions when it is able to, and it's not always clear whether the model could achieve the same performance with fewer dimensions or whether the extra dimensions are truly adding representational capacity.
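One way to probe that directly is an ablation: resample a single latent dimension from the prior and see whether reconstruction error actually moves. A rough sketch, assuming hypothetical `encoder`/`decoder` modules where the encoder returns `(mu, logvar)`:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ablation_delta(encoder, decoder, x, dim):
    """Increase in reconstruction error when latent dimension `dim` is
    replaced by a fresh draw from the prior. A near-zero delta suggests
    the decoder isn't relying on that dimension."""
    mu, logvar = encoder(x)               # hypothetical encoder API
    base = F.mse_loss(decoder(mu), x)     # reconstruct from posterior means
    z = mu.clone()
    z[:, dim] = torch.randn_like(z[:, dim])  # resample one dim from N(0, 1)
    return (F.mse_loss(decoder(z), x) - base).item()
```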
If some latent dimensions truly aren't contributing at all to the meaningful encoding, then I would think you could more likely identify this by looking at the weights in the decoder layers (since they wouldn't be needed to reconstruct the encoded input). I don't think this is as easy as it sounds, but I find it more plausible than determining this information strictly from comparing the distributions of the latent dimensions.
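To sketch what I mean, assuming the decoder's first layer is a plain `nn.Linear` from the latent space: the column of incoming weights for a dead dimension should have a small norm relative to the rest.

```python
import torch

def latent_dim_weight_norms(decoder_first_linear):
    """L2 norm of the decoder's first-layer weights fed by each latent
    dimension. Dimensions the decoder ignores should show norms near
    zero (assuming no later layer amplifies them back up)."""
    W = decoder_first_linear.weight  # shape: (hidden_dim, latent_dim)
    return W.norm(dim=0)             # one norm per latent dimension

# Hypothetical usage, where `decoder.fc1` names the first linear layer:
# norms = latent_dim_weight_norms(decoder.fc1)
# print(norms.sort().values)  # look for a cluster of near-zero norms
```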