
new_name_who_dis_ t1_iy4ol7g wrote

It depends on what data you're working with and what you're trying to do. For example, I've worked a lot with 3D datasets of face and body meshes that are in correspondence, and I actually used autoencoders to compress them to the same number of dimensions as PCA and compared the two.
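A minimal sketch of that kind of comparison (not the commenter's actual code): compress flattened mesh vertices with PCA and with a small autoencoder at the same bottleneck size, then compare reconstruction error. The data, architecture, and sizes here are placeholders.

```python
# Compare PCA and an autoencoder at the same latent dimensionality.
# `X` is a hypothetical (n_samples, n_vertices * 3) array of flattened
# vertex coordinates for meshes in correspondence; values are placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3000)).astype(np.float32)  # placeholder mesh data
k = 32  # shared latent dimensionality

# --- PCA baseline ---
pca = PCA(n_components=k).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
pca_err = np.mean((X - X_pca) ** 2)

# --- Autoencoder with the same bottleneck size ---
model = nn.Sequential(
    nn.Linear(X.shape[1], 256), nn.ReLU(),
    nn.Linear(256, k),                      # bottleneck of size k
    nn.Linear(k, 256), nn.ReLU(),
    nn.Linear(256, X.shape[1]),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
Xt = torch.from_numpy(X)
for _ in range(200):                        # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(Xt), Xt)
    loss.backward()
    opt.step()
ae_err = loss.item()

print(f"PCA reconstruction MSE: {pca_err:.4f}  AE reconstruction MSE: {ae_err:.4f}")
```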

Basically, with the network I'd get lower reconstruction error (especially at lower dimensions). However, the beauty of the PCA reduction was that one dimension was responsible for the size of the nose on the face, another was responsible for how wide or tall the head is, etc.
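A hedged sketch of how one might inspect what a single principal component controls: walk along one component while holding the rest at zero and decode back to mesh space. `pca` and `X` are the hypothetical objects from the sketch above; component index 0 is arbitrary.

```python
# Traverse a single principal component and decode back to mesh space.
import numpy as np

mean_code = np.zeros(pca.n_components_)
for step in np.linspace(-3, 3, 5):          # roughly +/- 3 standard deviations
    code = mean_code.copy()
    code[0] = step * np.sqrt(pca.explained_variance_[0])
    mesh = pca.inverse_transform(code.reshape(1, -1))
    # render/inspect `mesh` here; in the commenter's data one such traversal
    # changed nose size, another changed head width/height
```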

And you don't get such nice properties from a fancy VAE latent space. You can get a nicely disentangled latent space, but it usually doesn't happen for free: you often need to add even more complexity to get it that disentangled. With PCA, it's there by design.

7

olmec-akeru OP t1_iy7ai6s wrote

>beauty of the PCA reduction was that one dimension was responsible for the size of the nose

I don't think this always holds true. You're just lucky that your dataset's variation is confined in such a way that individual eigenvectors line up with a visual feature. There is no mathematical property of PCA that makes your statement true.
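For reference, what PCA itself guarantees (standard linear-algebra facts, not anything specific to face data):

```latex
% The principal directions are eigenvectors of the data covariance, each
% maximizing variance subject to orthogonality with the previous directions.
\Sigma = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^\top,
\qquad \Sigma w_j = \lambda_j w_j,
\qquad w_j = \arg\max_{\|w\|=1,\; w \perp w_1,\dots,w_{j-1}} w^\top \Sigma w
% Nothing in these conditions ties any w_j to a named semantic feature
% such as "nose size".
```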

There have been some attempts to formalise something like what you have described. The closest I've seen is the beta-VAE: https://lilianweng.github.io/posts/2018-08-12-vae/
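For context, the beta-VAE described at that link just reweights the KL term of the standard VAE objective to push toward disentanglement. A minimal sketch of that loss, assuming a Gaussian encoder whose outputs are named `mu` and `logvar` here (names are illustrative):

```python
# Beta-VAE objective: reconstruction term plus a KL term scaled by beta > 1.
import torch
import torch.nn.functional as F

def beta_vae_loss(x_recon, x, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```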

2

new_name_who_dis_ t1_iy84a83 wrote

It's not really luck. There is variation in nose size (it's one of the most varied features of the face), and so that variance is guaranteed to be represented in the eigenvectors.

And yes, beta-VAEs are one of the things you can try to get a disentangled latent space, although they don't really work that well in my experience.

1

olmec-akeru OP t1_iy8ajq0 wrote

> the beauty of the PCA reduction was that one dimension was responsible for the size of the nose

You posit that an eigenvector will represent the nose when there are meaningful variations of scale, rotation, and position?

This is very different from saying that all variance will be explained across the full set of eigenvectors (which very much is true).
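That latter statement is just the standard identity that the eigenvalues account for all of the variance:

```latex
% Total variance equals the sum of the covariance eigenvalues.
\operatorname{tr}(\Sigma) = \sum_{j=1}^{d} \lambda_j,
\qquad \text{fraction explained by component } j \;=\; \frac{\lambda_j}{\sum_k \lambda_k}
```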

1

new_name_who_dis_ t1_iy8b0jr wrote

It was just an example. Sure, not all variation in nose size falls along the same eigenvector.

1