Submitted by uwashingtongold t3_100bf73 in MachineLearning
Say we have two distributions of segmentation annotations (i.e., two sets of segmentation maps) that I want to establish are 'similar'. To be more specific, I'm working on a research project where we have in-house annotations for images from a public dataset, and I want to quantitatively establish that our annotations and the dataset's annotations differ in similar ways, or that our annotations 'fall within the distribution of the current dataset'.
(I'm aware that I can measure similarity between distributions with a measure like KL-divergence, but what I'm not sure about is how I would establish what level is 'similar enough'.)
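One rough way to give the "similar enough" question some footing: compare the KL divergence between your annotations and the dataset's against the divergence you get from random splits of the dataset alone, which acts as a baseline for "drawn from the same distribution". The sketch below assumes label maps are HxW integer arrays and that the class count is known; the loader names are hypothetical placeholders, not anything from the thread.

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def class_histogram(seg_maps, num_classes):
    """Pooled per-class pixel frequencies over a list of HxW label maps."""
    counts = np.zeros(num_classes)
    for m in seg_maps:
        counts += np.bincount(m.ravel(), minlength=num_classes)[:num_classes]
    counts += 1e-8                      # smoothing so KL stays finite
    return counts / counts.sum()

def kl_between(maps_a, maps_b, num_classes):
    p = class_histogram(maps_a, num_classes)
    q = class_histogram(maps_b, num_classes)
    return entropy(p, q)

def null_divergences(maps, num_classes, n_splits=200, seed=0):
    """KL between random halves of ONE source: a baseline for 'same distribution'."""
    rng = np.random.default_rng(seed)
    maps = list(maps)
    half = len(maps) // 2
    out = []
    for _ in range(n_splits):
        idx = rng.permutation(len(maps))
        a = [maps[i] for i in idx[:half]]
        b = [maps[i] for i in idx[half:]]
        out.append(kl_between(a, b, num_classes))
    return np.array(out)

# usage (hypothetical loaders and class count):
# ours, theirs = load_maps("in_house/"), load_maps("public/")
# observed = kl_between(ours, theirs, num_classes=19)
# baseline = null_divergences(theirs, num_classes=19)
# print(observed, np.percentile(baseline, 95))
```

If the observed divergence sits inside the spread of the random-split baseline, that's at least an argument (not a proof) that your annotations fall within the dataset's distribution.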
yeolj0o t1_j2hkerk wrote
I've been having the exact same thought about comparing segmentation labels. The best "deep learning style" approach I've come up with (and it's also not satisfying) is to run a semantic image synthesis model (e.g., SPADE, OASIS, PITI) on each set of labels and compare FIDs. A better approach for my case (I'm working with Cityscapes), where the scene layout is mostly fixed, is to use height priors and compare KL divergence or FSD per height band (bottom, mid, top).