olmec-akeru OP t1_iy7amgl wrote

I'm not sure: if you think about t-SNE, it's minimising a form of the Kullback–Leibler divergence. That means it's trying to group similar observations together in the embedding space, which is quite different from "more features into fewer features".

1

Dylan_TMB t1_iy7brke wrote

I would disagree. t-SNE takes points in a higher-dimensional space and attempts to find a transformation that places them in a lower-dimensional embedding space while preserving the similarities from the original space. In the end, each point has its original vector (more features) mapped to a lower-dimensional vector (fewer features). The mapping is non-linear, but that's all the operation produces.
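
To make the "more features into fewer features" point concrete, here's a minimal sketch with scikit-learn (the data, shapes, and parameters are made up for illustration):

```python
# Sketch only: random data standing in for a real feature matrix.
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(500, 50)  # 500 observations, 50 features

# t-SNE maps each 50-dimensional row to a 2-dimensional one.
emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)

print(X.shape)    # (500, 50) -- original feature space
print(emb.shape)  # (500, 2)  -- fewer features per point, similarities preserved
```

Each row still corresponds to the same observation; it just has fewer features after the mapping.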

1

olmec-akeru OP t1_iy7i546 wrote

Heya! Appreciate the discourse, it's awesome!

As a starting point, here's the rough description of the t-SNE algorithm from Wikipedia:

> The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence (KL divergence) between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate.

So the algorithm is definitely minimising the KL divergence. In minimising the KL divergence between the two distributions, it is looking for a mapping in which similar points sit close together and dissimilar points sit further apart in the embedding space.
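
For reference, the objective being minimised can be written as (standard t-SNE notation: y_i are the embedded points, p_ij and q_ij the high- and low-dimensional pairwise affinities):

```latex
q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}
              {\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}},
\qquad
C = \mathrm{KL}(P \,\Vert\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}
```

Because the KL divergence is asymmetric, pairs with a large p_ij are penalised most heavily if they end up mapped far apart, which is why similar observations get grouped together in the embedding.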

2