Submitted by carlml t3_xw0pql in MachineLearning

Suppose you have an image and you pass it through a VAE; it gives you back a similar image. What happens if you feed that output back through the VAE as input, repeatedly? Is there any theory discussing this? Has anyone done experiments or written papers on it?

I'll be writing some code to do this, but if anyone knows anything regarding this, please share.

Disclaimer: this is driven purely by curiosity.
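In case it helps anyone trying this, here is a minimal sketch of the experiment loop. The "encoder" and "decoder" below are hypothetical stand-ins (a random linear map and its pseudo-inverse) rather than a trained VAE, which would be nonlinear and stochastic; the point is only the feed-the-output-back-in structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: a random linear "encoder"
# and its pseudo-inverse as the "decoder". A real VAE is nonlinear and
# stochastic; this only illustrates the iteration loop.
d, k = 16, 4                      # data dim, latent dim
W = rng.standard_normal((k, d))   # "encoder" weights
W_dec = np.linalg.pinv(W)         # "decoder" weights

def encode(x):
    return W @ x

def decode(z):
    return W_dec @ z

x = rng.standard_normal(d)
trajectory = [x]
for _ in range(5):                # feed each reconstruction back in
    x = decode(encode(x))
    trajectory.append(x)

# For this linear toy model, decode(encode(.)) is an orthogonal
# projection, so the iteration hits a fixed point after one step.
print(np.linalg.norm(trajectory[1] - trajectory[2]))
```

Note that for the toy linear model the map is idempotent, so nothing interesting happens after the first pass; the saturation/drift people report presumably comes from the nonlinearity and the sampling noise of a real VAE.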

3

Comments


matigekunst t1_ir4g87w wrote

I tried it and sometimes ended up with saturated noise, but occasionally with something that looked like a Turing pattern.

2

dasayan05 t1_ir4kr52 wrote

I don't know the answer, but does this "feeding back its reconstruction" have any meaning/interpretation?

0

sieisteinmodel t1_ir4u28y wrote

You will be MCMC sampling from the model. Check out the appendix of:

Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. "Stochastic backpropagation and approximate inference in deep generative models." International Conference on Machine Learning. PMLR, 2014.

2
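To make the Markov-chain view concrete: iterating the VAE amounts to alternately sampling z ~ q(z|x) and x ~ p(x|z), which defines a transition kernel on image space. The sketch below uses toy linear-Gaussian "encoder"/"decoder" maps as hypothetical stand-ins for a trained VAE's networks; the scales are chosen so the chain stays bounded.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of the Markov-chain view: alternate sampling of
# z ~ q(z|x) and x ~ p(x|z). The linear-Gaussian maps below are
# hypothetical stand-ins for a trained VAE's networks.
d, k = 8, 2
W_enc = rng.standard_normal((k, d)) * 0.1
W_dec = rng.standard_normal((d, k)) * 0.1

def sample_z(x):
    mu = W_enc @ x
    return mu + 0.1 * rng.standard_normal(k)   # z ~ q(z|x)

def sample_x(z):
    mu = W_dec @ z
    return mu + 0.1 * rng.standard_normal(d)   # x ~ p(x|z)

x = rng.standard_normal(d)
chain = [x]
for _ in range(100):
    x = sample_x(sample_z(x))   # one step of the transition kernel
    chain.append(x)

chain = np.stack(chain)
print(chain.shape)   # (101, 8)
```

Under this reading, what the iterated reconstructions converge to (saturated noise, Turing-like patterns, etc.) is a draw from the stationary distribution of that kernel, which need not match the data distribution unless the encoder and decoder are consistent with each other.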

MagentaBadger t1_ir4zkun wrote

Here is a paper you may be interested in about “Perceptual Generative Autoencoders”. The authors leverage the concept you describe to improve training and generative performance. 🙂

7

carlml OP t1_ir5o7mc wrote

After how many iterations? During the process, did you ever obtain an image that looked better than your original reconstruction, or did it consistently get worse?

2

dasayan05 t1_ir5q57s wrote

Yes, I understand what you mean.
I am asking whether feeding back its output has any special interpretation in terms of VAEs. Is there any rationale behind doing this? Are you expecting something specific from it?

0