Submitted by olmec-akeru t3_z6p4yv in MachineLearning
new_name_who_dis_ t1_iy84a83 wrote
Reply to comment by olmec-akeru in [D] What method is state of the art dimensionality reduction by olmec-akeru
It’s not really luck. Nose size varies a lot across people (it’s one of the most varied features of the face), so that variance is guaranteed to be represented in the eigenvectors.
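A minimal sketch of that effect with made-up data — the "nose size" feature, its index, and its inflated variance are all illustrative, not from any real face dataset:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 500 synthetic "faces" with 10 features; feature 0 stands in for
# nose size and is given much higher variance than the others.
X = rng.normal(size=(500, 10))
X[:, 0] *= 5.0

pca = PCA().fit(X)

# The top eigenvector loads almost entirely on the high-variance feature,
# and the first component dominates the explained variance.
print(np.round(pca.components_[0], 2))
print(np.round(pca.explained_variance_ratio_, 2))
```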
And yes, beta-VAEs are one of the things you can try to get a disentangled latent space, although in my experience they don’t work all that well.
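For reference, a minimal sketch of the beta-VAE objective: the usual VAE loss with the KL term scaled by beta > 1, which is what nudges the model toward a disentangled latent space. The PyTorch framing and the beta value here are illustrative:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error plus a KL term
    weighted by beta > 1 (beta = 1 recovers the plain VAE)."""
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, diag(sigma^2)) and the prior N(0, I)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```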
olmec-akeru OP t1_iy8ajq0 wrote
> the beauty of the PCA reduction was that one dimension was responsible for the size of the nose
You posit that an eigenvector will represent the nose when there are meaningful variations of scale, rotation, and position?
This is very different from saying that all of the variance will be explained across the full set of eigenvectors (which very much is true).
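That last part is easy to check numerically: keep all components and PCA’s explained-variance ratios sum to one, whatever the individual eigenvectors happen to encode (toy data again, purely for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))

pca = PCA().fit(X)

# The full set of eigenvectors always accounts for all of the variance,
# even when no single eigenvector maps onto one nameable attribute.
print(pca.explained_variance_ratio_.sum())  # ~1.0
```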
new_name_who_dis_ t1_iy8b0jr wrote
It was just an example. Sure, not all variation in nose size falls along a single eigenvector.