Submitted by Blutorangensaft t3_11qejcz in MachineLearning

What is the current state-of-the-art when it comes to the generalisation ability of autoencoders? I have been working with text autoencoders for some time and, although they work well on the training data, they generalise very poorly to unseen sentences (as, for example, noted here: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=there+and+back+again+autoencoder&btnG=#d=gs_qabs&t=1678725350369&u=%23p%3DksKOTTf1c1IJ). How do image autoencoders do with unseen images? What research efforts are underway to improve generalisation ability?

7

Comments

currentscurrents t1_jc31c23 wrote

Vanilla autoencoders don't generalize well, but variational autoencoders have a much better-structured latent space and generalize much better.
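
To make "better structured" concrete, here's a minimal sketch of the VAE bottleneck (PyTorch; `mu`/`logvar` are random stand-ins for real encoder outputs): the KL penalty pulls the posterior toward a standard normal, which is what smooths out the latent space.

```python
import torch

# Hypothetical encoder outputs for a batch of 8 with a 32-dim latent.
mu, logvar = torch.randn(8, 32), torch.randn(8, 32)

# Reparameterization trick: sample z = mu + sigma * eps.
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior;
# adding this to the reconstruction loss is what regularizes the latents.
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
```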

Generalization really comes down to inductive biases. Autoencoders are downscalers -> upscalers, so they have an inductive bias towards preserving large features in the data and discarding small details. This is reasonable for images but not so much for text.
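
Here's roughly what I mean, as a toy sketch (PyTorch; the sizes are made up): the strided convs downscale, the transposed convs upscale, and the smaller bottleneck is what forces the model to keep large features and drop fine detail.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                            # downscaler
            nn.Conv2d(3, 16, 3, stride=2, padding=1),            # H -> H/2
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),           # H/2 -> H/4
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(                            # upscaler
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # H/4 -> H/2
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # H/2 -> H
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(8, 3, 32, 32)                            # dummy 32x32 RGB batch
loss = nn.functional.mse_loss(ConvAutoencoder()(x), x)   # reconstruction objective
```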

But autoencoders are just one example of an information bottleneck model, which includes everything from autoregressive language models to diffusion models to U-Nets. (U-Nets are basically just autoencoders with skip connections!) They all throw away part of the data and learn how to reconstruct it.
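
For the U-Net point, a toy sketch (again PyTorch, made-up sizes): same downscale/upscale shape as the autoencoder above, except activations are concatenated back into the decoder, so fine detail can bypass the bottleneck instead of being thrown away.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(3, 16, 3, stride=2, padding=1)           # H -> H/2
        self.up = nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1)   # H/2 -> H
        self.out = nn.Conv2d(16 + 3, 3, 3, padding=1)  # 16 decoder + 3 skip channels

    def forward(self, x):
        h = torch.relu(self.down(x))
        h = torch.relu(self.up(h))
        h = torch.cat([h, x], dim=1)  # the skip connection: input detail re-enters here
        return self.out(h)

y = TinyUNet()(torch.randn(1, 3, 32, 32))  # shape preserved: (1, 3, 32, 32)
```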

Different kinds of bottlenecks have different inductive biases and are better suited to different kinds of data. Next-word-prediction seems to be better suited for text because it reflects the natural flow of language.
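
Put differently, the "bottleneck" for a language model is just the shift-by-one objective. Sketch (PyTorch; the two-layer `model` is only a stand-in for a real LM, with no context mixing):

```python
import torch
import torch.nn as nn

vocab_size = 1000
tokens = torch.randint(0, vocab_size, (4, 16))   # dummy batch of token ids

# Stand-in for a real language model, illustration only.
model = nn.Sequential(nn.Embedding(vocab_size, 64), nn.Linear(64, vocab_size))

logits = model(tokens[:, :-1])                   # predict from the prefix
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),                   # targets are the next tokens
)
```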

6

speyside42 t1_jc44rbn wrote

> Vanilla autoencoders don't generalize well, but variational autoencoders have a much better structured latent space and generalize much better.

For toy problems, yes, but not in general. For an image autoencoder that generalizes, see for example ConvNeXt V2: https://arxiv.org/pdf/2301.00808.pdf

As a side note: the VQ-VAE from the blog post actually has very little to do with variational inference. There is essentially no prior at all (just a uniform distribution over all discrete latents), so the KL-divergence term can be dropped from the objective. It's basically a glorified quantized autoencoder that happens to be expressible in the language of variational models.
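
For anyone who hasn't looked inside one, the quantization step is just a nearest-neighbour lookup; with a uniform prior over the codes the KL term is a constant, which is why it drops out. Rough sketch (PyTorch; codebook and commitment losses omitted):

```python
import torch

codebook = torch.randn(512, 64)        # 512 discrete codes, 64-dim each
z_e = torch.randn(8, 64)               # hypothetical encoder outputs

dists = torch.cdist(z_e, codebook)     # L2 distance to every code
z_q = codebook[dists.argmin(dim=1)]    # snap each output to its nearest code

# Straight-through estimator: the forward pass uses z_q, but gradients
# flow back to the encoder as if quantization were the identity.
z_st = z_e + (z_q - z_e).detach()
```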

3

Red-Portal t1_jc4u84k wrote

What do you mean by generalizing here? Reconstruction of OOD data? Ironically, VAEs are pissing everybody off because they reconstruct OOD data *too* well. One of the things people are dying to get to work is anomaly or OOD detection, but VAEs suck at it despite all attempts. Like a dog that can't guard the house because he loves strangers, VAEs fail at OOD detection because they reconstruct OOD inputs too faithfully.
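
The standard recipe people keep trying looks like this (sketch; `vae` is a hypothetical trained model whose forward pass reconstructs its input):

```python
import torch

def anomaly_score(vae, x):
    """Per-sample reconstruction error, used as an OOD score."""
    with torch.no_grad():
        recon = vae(x)
    # The failure mode above: OOD inputs often score just as low as
    # in-distribution ones, so thresholding this doesn't separate them.
    return ((recon - x) ** 2).flatten(1).mean(dim=1)
```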

5

Noddybear t1_jc509fa wrote

I spent a year with a team of engineers trying to get VAEs to work for textual anomaly detection. It didn't work that well.

3

currentscurrents t1_jc5fxq3 wrote

Wouldn't that make them great for the task they're actually learning to do: compression? You want to be able to compress and reconstruct any input data, even if it's less efficient on OOD data.

I do wonder why we don't use autoencoders for data compression. It may simply be that neural networks require ~1000x more compute than traditional compressors.
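
Back-of-envelope for why the bottleneck looks like compression (purely illustrative numbers, not from any real codec):

```python
# A 32x32 RGB image at 8 bits/channel vs. an 8x8x32 latent
# quantized to 8 bits per value, before any entropy coding.
image_bytes = 32 * 32 * 3    # 3072 bytes raw
latent_bytes = 8 * 8 * 32    # 2048 bytes for the bottleneck
print(f"naive ratio: {image_bytes / latent_bytes:.1f}x")  # 1.5x
```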

1

Red-Portal t1_jc5g7ap wrote

Oh, they have been used for compression. I also remember a paper on quantization that made quite a buzz at the time.

1

currentscurrents t1_jc5ghbv wrote

Would love to read some research papers if you have a link!

But I mean that we don't use them for compression in practice. We use hand-crafted algorithms: mostly DEFLATE for lossless, plus a handful of lossy DCT-based codecs for audio/video/images.
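
For contrast, the hand-crafted baseline is a few stdlib calls (Python's zlib implements DEFLATE):

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 100
packed = zlib.compress(data, level=9)     # DEFLATE, max compression
assert zlib.decompress(packed) == data    # lossless round trip
print(len(data), "->", len(packed), "bytes")
```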

1

FrogBearSalamander t1_jc5vvrb wrote

> Would love to read some research papers if you have a link!

2