anonymousTestPoster

anonymousTestPoster t1_j2r4kyj wrote

> Theoretical ML: It's BS, literally

You say this, but I would argue that most of the best people I have personally encountered in the ML industry possess a strong theoretical background. Of course, for very practical questions I wouldn't necessarily reach for a theory book, or go read a textbook on algebraic geometry..... But those well-versed in theory tend to understand novel situations and problems very quickly, and have very adaptable minds.

So if a theoretician can successfully transition their "post-PhD" persona to industry, I think they stand the best chance of being one of the most valuable team players. "Everyone" can code, or so they say, but not everyone can understand models at the depth a theoretician can.

For example, if something isn't working, I would rather first seek the counsel of someone with a theoretical background working in industry than of someone who has only ever worked in industry, unless that person is exceptionally talented and has something like 10 years of experience.

2

anonymousTestPoster t1_iykfyen wrote

It is an interesting concept, because it is effectively an anti-classifier / anti-segmenter.

Usually we want to maximize identification and/or segmentation within an image, but here you would want to reverse the cost function in a sense, so as to minimize identifiability. The theoretical best you could achieve is probably when detections are no better than uniform random sampling across a grid.
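Concretely, one way to "reverse the cost function" (a rough sketch; the entropy-based loss here is just one illustrative choice, not from any particular paper) is to penalize confident detector outputs instead of rewarding them. The uniform distribution then attains the minimum of the reversed loss, matching the "uniform random sampling" limit:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def anti_classifier_loss(class_probs):
    """Reversed objective: instead of rewarding confident class
    predictions, penalize them. Minimizing this loss pushes the
    detector's output toward maximum uncertainty."""
    return -entropy(class_probs)

# A confident detection is heavily penalized, while the uniform
# distribution achieves the minimum possible loss over 4 classes.
confident = [0.97, 0.01, 0.01, 0.01]
uniform = [0.25, 0.25, 0.25, 0.25]
assert anti_classifier_loss(uniform) < anti_classifier_loss(confident)
```

In practice you would fold a loss like this into whatever detector/segmenter you are evaluating the camo against, but the intuition is the same: the camo "wins" when the detector can do no better than guessing uniformly.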

What you could do is take a set of images of various locations under different conditions/weather, superimpose the camo in various orientations, and find which camo performs best in which settings most often.

This would be the quick-and-dirty starting approach; then you can focus in on particular use cases/conditions, such as the other poster commented on:

> varying vegetation (sage, nothing, large deciduous trees, pines, ...). The person may be laying down in the bushes, walking down an open path, ...
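The superimpose-and-score evaluation could be sketched like this (the camo names, condition names, and scores below are all made up for illustration; in reality each score would come from running a detector on the composited images):

```python
import numpy as np

# Hypothetical detectability scores (lower = harder to spot): one row
# per camo pattern, one column per background condition.
camos = ["woodland", "desert", "digital"]
conditions = ["pine", "sage", "open_path"]
scores = np.array([
    [0.12, 0.55, 0.70],   # woodland
    [0.80, 0.20, 0.35],   # desert
    [0.40, 0.45, 0.30],   # digital
])

# For each condition, the camo with the lowest detection score wins.
best = {cond: camos[int(np.argmin(scores[:, j]))]
        for j, cond in enumerate(conditions)}
print(best)  # {'pine': 'woodland', 'sage': 'desert', 'open_path': 'digital'}
```

With more composites per cell (orientations, lighting, poses) you would average scores within each cell before picking a winner, which is exactly the "which camo performs best in which settings most often" part.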

4

anonymousTestPoster t1_ixyvrwy wrote

> Hi, metric learning is an umbrella term like self-supervised learning, detection, and tracking.

This is basically my point: what is the need for an umbrella term? There is an infinitude of ways in which sub-topics can be linked together. Rather than saying:

> people still need some tools and tutorials to solve their problems.

Isn't it better that people appeal to self-supervised learning, detection, or tracking directly, depending on the problem at hand? These sub-topics are sufficiently different that they should be considered quite separately. Even within "supervised learning" we treat the sub-problems of regression and classification very differently. There is theoretical interest in combining both topics to discuss their similarities, but practically speaking one frames a specific problem as either a "classification" or a "regression" task, so it is ultimately not useful to consider a practical problem as being of "supervised" type, apart from maybe 1-2 sentences in the introduction.

1

anonymousTestPoster t1_ixxq87i wrote

Is metric learning a new buzzword, or does it represent a genuinely new research direction? The idea of embedding into a vector space (for whatever purpose) is not a new concept.

Of course one may not know the embedding procedure (is this what they call representation learning?), but the way metric learning and/or representation learning appears to address this is by doing what is effectively a grid search of sorts (which can be extended to continuous parameter spaces if necessary) over a set of possible embeddings / projections / metrics.
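To make that "grid search over metrics" reading concrete (a toy sketch under my own assumptions, not how any metric-learning library actually works): take a few candidate weighted distances and pick the one that best separates the classes by nearest-neighbour accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two informative dimensions plus one noisy one.
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
X = np.hstack([X, rng.normal(0, 10, (40, 1))])  # noisy third dimension
y = np.array([0] * 20 + [1] * 20)

# Candidate metrics: plain Euclidean vs. variants that down-weight
# the noisy dimension (names are illustrative, not standard terms).
candidates = {
    "euclidean": np.array([1.0, 1.0, 1.0]),
    "downweight_noise": np.array([1.0, 1.0, 0.1]),
    "ignore_noise": np.array([1.0, 1.0, 0.0]),
}

def loo_nn_accuracy(X, y, w):
    """Leave-one-out 1-NN accuracy under a diagonally weighted metric."""
    d = np.sqrt((((X[:, None] - X[None, :]) * w) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)  # exclude each point from its own neighbours
    return float((y[d.argmin(1)] == y).mean())

# "Learning" the metric here is just a grid search over the candidates.
best = max(candidates, key=lambda k: loo_nn_accuracy(X, y, candidates[k]))
```

Real metric-learning methods parameterize the metric continuously and optimize it by gradient descent rather than enumerating candidates, but conceptually that is still a search over a family of metrics.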

Of course, I could be wrong and missing the point entirely, since I only very quickly skimmed a few paragraphs here and there. Please correct me if I am wrong.

7