Submitted by Zestyclose-Check-751 t3_z5domj in MachineLearning



Hi, everyone! I invite you to read a post / tutorial about metric learning. It includes a theory overview and practical examples with illustrations and code snippets written with OpenMetricLearning (a new PyTorch-based library). As a bonus, you will learn how to train a model that performs at a SotA level using a few simple heuristics. Enjoy the read!
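To give a flavour of what the tutorial walks through, here is a minimal sketch of the core idea, learning an embedding with a triplet loss, in plain PyTorch (the backbone and hyperparameters below are placeholders, and this is not OpenMetricLearning's actual API):

```python
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Maps images to L2-normalized embedding vectors."""
    def __init__(self, backbone: nn.Module, emb_dim: int = 128):
        super().__init__()
        self.backbone = backbone              # any trunk returning feature maps
        self.head = nn.LazyLinear(emb_dim)    # projection into the embedding space

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return nn.functional.normalize(self.head(feats), dim=1)

# Placeholder backbone; in practice you would use a pretrained CNN/ViT trunk.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
model = Embedder(backbone)
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a batch of (anchor, positive, negative) image triplets:
# the margin pushes positives closer to the anchor than negatives.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```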

126

Comments


anonymousTestPoster t1_ixxq87i wrote

Is metric learning a new buzzword, or does it represent a genuinely new research direction? Because the idea of vector-space embedding (for whatever purpose) is not a new concept.

Of course one may not know the embedding procedure (is this what they call representation learning?), but the way metric learning and/or representation learning appears to address this is by doing what seems effectively like a grid search of sorts (which can be extended to continuous parameter spaces if necessary) over a set of possible embeddings / projections / metrics.
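To make that reading concrete, here is roughly what I have in mind as a toy sketch (the candidate projections and the 1-NN evaluation are hypothetical stand-ins, not anyone's actual pipeline):

```python
import numpy as np

def knn_accuracy(emb: np.ndarray, labels: np.ndarray, metric: str) -> float:
    """Leave-one-out 1-NN accuracy of an embedding under a given metric."""
    if metric == "cosine":
        # After normalization, Euclidean ranking is equivalent to cosine ranking.
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    dists = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-matches
    return float((labels[dists.argmin(axis=1)] == labels).mean())

# Toy stand-ins for precomputed features and class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 4, size=100)

# "Grid" of candidate embeddings / projections / metrics to search over.
candidates = {
    "identity+euclidean": (X, "euclidean"),
    "identity+cosine": (X, "cosine"),
    "random_proj_2d+euclidean": (X @ rng.normal(size=(5, 2)), "euclidean"),
}
best = max(candidates, key=lambda name: knn_accuracy(candidates[name][0], y, candidates[name][1]))
print("best embedding/metric combination:", best)
```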

Of course I could be wrong and missing the point entirely, since I only very, very quickly skimmed a few paragraphs here or there. Please correct me if I am wrong.

7

larryobrien t1_ixxtepk wrote

I think it's just the emergence of a term for "that kind" of approach. I use metric learning for low-k-shot re-identification, and it's very well-trodden ground from a research perspective, but it's helpful to distinguish it from a plain-vanilla classification approach.
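Concretely, once the embedding is trained, re-identification is basically a nearest-neighbour lookup against a small gallery of reference embeddings; a rough sketch with made-up identities and random vectors standing in for real features:

```python
import numpy as np

def reidentify(query_emb: np.ndarray, gallery_embs: np.ndarray, gallery_ids: list) -> str:
    """Return the gallery identity whose embedding is most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    return gallery_ids[int(np.argmax(g @ q))]  # cosine similarity, best match wins

# Toy gallery: 3 known identities, each with a single (k=1) reference embedding.
gallery = np.random.randn(3, 128)
print(reidentify(np.random.randn(128), gallery, ["id_A", "id_B", "id_C"]))
```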

5

VenerableSpace_ t1_ixxy2dy wrote

Metric learning isn't really a new buzzword; it has been used for these types of approaches for several years now. It's a good framework for thinking about these approaches collectively, but there is some overlap; e.g., self-attention as a stand-alone layer can be viewed as a form of metric learning, e.g. in a ViT: how do we relate the input patch embeddings to one another s.t. we can discriminate between the classes?
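In code the analogy is just that the attention weights are learned pairwise similarities between patch embeddings; a minimal single-head sketch (not a full ViT block):

```python
import torch
import torch.nn as nn

class PatchSimilarityAttention(nn.Module):
    """Single-head self-attention: the attention map acts as a learned
    similarity ("metric" of sorts) between patch embeddings."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, patches: torch.Tensor):  # (batch, n_patches, dim)
        q, k, v = self.q(patches), self.k(patches), self.v(patches)
        # Scaled dot product: a learned (asymmetric) similarity between patches.
        sim = q @ k.transpose(-2, -1) / patches.shape[-1] ** 0.5
        attn = sim.softmax(dim=-1)   # row-normalized similarities
        return attn @ v, attn        # updated embeddings + the similarity map

x = torch.randn(2, 16, 64)           # 2 images, 16 patches, 64-dim embeddings
out, attn = PatchSimilarityAttention(64)(x)
print(attn.shape)                     # torch.Size([2, 16, 16]) pairwise weights
```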

2

Zestyclose-Check-751 OP t1_ixym3ea wrote

> How do we relate the input patch embeddings to one another s.t. we can discriminate between the classes?

Hi, metric learning is an umbrella term like self-supervised learning, detection, and tracking. So nobody claims that the domain is new. But there are new approaches in this domain, which are also mentioned in the article (like Hyp-ViT). Finally, even though the domain is not new, people still need some tools and tutorials to solve their problems.
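For example, the hyperbolic approaches replace the usual cosine / Euclidean comparison with a distance in the Poincaré ball; here is the standard distance formula as a small sketch (the general formula, not Hyp-ViT's exact implementation):

```python
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Geodesic distance in the Poincaré ball (points must have norm < 1)."""
    x2 = (x * x).sum(-1)
    y2 = (y * y).sum(-1)
    diff2 = ((x - y) ** 2).sum(-1)
    denom = (1 - x2).clamp_min(eps) * (1 - y2).clamp_min(eps)
    return torch.acosh(1 + 2 * diff2 / denom)

# Two toy embeddings inside the unit ball; smaller distance = more similar.
a = torch.tensor([0.1, 0.2, 0.0])
b = torch.tensor([0.4, -0.3, 0.1])
print(poincare_distance(a, b))
```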

0

anonymousTestPoster t1_ixyvrwy wrote

> Hi, metric learning is an umbrella term like self-supervised learning, detection, and tracking.

This is basically my point: what is the need for an umbrella term? There is an infinitude of ways in which subtopics can be linked together. Rather than having:

> people still need some tools and tutorials to solve their problems.

Isn't it better that people appeal to self-supervised learning, detection, and tracking directly, depending on the problem at hand? These subtopics are sufficiently different that they should be considered quite separately. Even for something like "supervised learning" we treat the sub-problems of regression and classification very differently. Although there is theoretical interest in discussing their similarities together, practically speaking one would frame a specific problem as a "classification" or a "regression" task, so it is ultimately not useful to treat a practical problem as being of the "supervised" type, apart from maybe 1-2 sentences in the introduction.

1