
jackilion t1_j634fkx wrote

Blutorangensaft OP t1_j6356ho wrote

Compare different autoencoders on their ability to produce valid language from a continuous space. Later, I want to generate sentences in the autoencoder's latent space with another neural network and have the autoencoder decode them into real sentences. I want the space to be smooth because the second network will naturally rely on gradient descent, which works through infinitesimal changes. I believe it will perform better if small moves in latent space correspond to meaningful distances between real sentences.
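
To make the setup concrete: gradients have to flow through the decoder into the latent space, so the decoder's smoothness directly shapes what the optimizer sees. A minimal PyTorch sketch (toy placeholder networks and dimensions; optimizing a single latent point directly stands in for the second network):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two trained networks (purely illustrative; the
# real architectures are not fixed yet): a decoder mapping latent vectors
# to token logits, and a scorer rating the decoded output.
latent_dim, vocab_size, seq_len = 32, 100, 10
decoder = nn.Linear(latent_dim, seq_len * vocab_size)
scorer = nn.Linear(seq_len * vocab_size, 1)
for p in list(decoder.parameters()) + list(scorer.parameters()):
    p.requires_grad_(False)  # both networks assumed trained and frozen

# Optimize the latent point itself by gradient descent: gradients flow
# through the frozen decoder back into z, so how smooth the decoder is
# around z determines how informative those gradients are.
z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    logits = decoder(z)
    loss = -scorer(logits).mean()  # maximize the scorer's rating
    loss.backward()
    opt.step()
```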

jackilion t1_j63e6ah wrote

There is no reason to assume your latent space will be smooth by itself. I remember a paper on image generation that introduced techniques for smoothing out the latent space during training:

https://arxiv.org/abs/2106.09016

It's about GANs, not autoencoders, but maybe you can find some ideas in there.
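
For intuition, regularizers in this family often penalize how much the generator's or decoder's output moves under a small latent perturbation. A generic sketch of that idea (a common pattern, not necessarily the paper's exact formulation):

```python
import torch
import torch.nn as nn

# Generic latent-smoothness penalty: how much does the output change
# under a small random perturbation of the latent code?
def smoothness_penalty(decoder, z, eps=1e-2):
    noise = eps * torch.randn_like(z)
    out = decoder(z)
    out_perturbed = decoder(z + noise)
    # Dividing by eps**2 makes this scale like a squared directional
    # derivative of the decoder at z.
    return ((out - out_perturbed) ** 2).mean() / eps ** 2

# Usage during training (toy decoder for illustration):
decoder = nn.Linear(32, 64)
z = torch.randn(8, 32)
reg = smoothness_penalty(decoder, z)
# total_loss = reconstruction_loss + smooth_weight * reg
```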

Blutorangensaft OP t1_j654qyd wrote

Thank you for the reference; it looks very promising. I've heard of ways to smooth the latent space through Lipschitz regularisation, but then got disappointed again when I read "ah well, it's just layer normalisation". So many things in ML appear under different names yet turn out to mean the same thing once you implement them.
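
For reference, one concrete form of Lipschitz regularisation is spectral normalisation, which bounds each layer's Lipschitz constant via the largest singular value of its weight matrix; here is what it looks like in PyTorch (assuming that framework), and it really is a different operation from layer normalisation:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral normalization rescales each weight matrix by its largest
# singular value, bounding the layer's Lipschitz constant near 1.
# Unlike layer normalization, it constrains the weights rather than
# standardizing the activations.
encoder = nn.Sequential(
    spectral_norm(nn.Linear(256, 128)),
    nn.ReLU(),
    spectral_norm(nn.Linear(128, 32)),
)
```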
