
zeyus t1_j0kxp2n wrote

True, the thought did occur to me, but I figured you could train the "other" category on a diverse set of animals, plus people, nature, cars, landscapes, etc. While the set of "non-dog" or "non-cat" images is effectively infinite, it must be possible to learn features that clearly don't indicate a dog or a cat... I don't think it's the most effective method, perhaps, but it would be interesting to give it a go; maybe after my exams I'll try...

I can't shake the feeling that it might somehow be informative at the classification layer, either for reducing the confidence of the other categories or for weighting them somehow.
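A minimal sketch of the idea, using hypothetical 2-D toy features rather than real images: treat "other" as just a third softmax class alongside cat and dog, and train a single linear layer on it (all cluster positions and data here are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical 2-D "features": tight cat and dog clusters, plus a
# broadly scattered "other" class drawn from everywhere else.
cats  = rng.normal([0.0, 2.0], 0.3, size=(50, 2))
dogs  = rng.normal([2.0, 0.0], 0.3, size=(50, 2))
other = rng.uniform(-4.0, 6.0, size=(50, 2))
X = np.vstack([cats, dogs, other])
y = np.repeat([0, 1, 2], 50)

# One linear softmax head trained with gradient descent on cross-entropy.
W = np.zeros((2, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(2000):
    p = softmax(X @ W + b)
    grad = (p - onehot) / len(X)
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

This works to a point, but note the structural weakness: a linear head assigns "other" a single convex region of feature space, so it can never surround the known classes; "other" samples on the far side of the cat or dog clusters get misclassified, which hints at why the approach tends to be brittle in general.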


trajo123 t1_j0l5hwj wrote

You will get some results, for sure; depending on your application, it may even be good enough. But as a general probability that an image is something other than a cat or a dog, not really.
As other commenters have mentioned, the general problem is known as OOD (out-of-distribution) sample detection. There are deep learning models which model probabilities explicitly and can in principle be used for OOD sample detection: Variational Autoencoders (VAEs). The original formulation of this model performs poorly in practice at OOD sample detection, but there is work addressing some of its shortcomings, for instance "Detecting Out-of-distribution Samples via Variational Auto-encoder with Reliable Uncertainty Estimation". But with VAEs things get very mathematical, very fast.
Coming back to your initial question: no, softmax is not appropriate as a measure of "confidence", but this is an open problem in deep learning.
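To see why softmax isn't a confidence measure, here's a toy sketch with a hypothetical linear cat/dog head (the weights and inputs are made up): an input far outside any training data can get a *higher* softmax score than a genuine in-distribution one.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical linear cat/dog head: logits = x @ W.
W = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

x_cat = np.array([2.0, 0.0])     # plausible in-distribution "cat" input
x_far = np.array([50.0, -50.0])  # absurd input, nowhere near any training data

p_cat = softmax(x_cat @ W)  # high cat probability
p_far = softmax(x_far @ W)  # even higher cat probability
```

Softmax only reflects the input's position relative to the decision boundary between the known classes, not its distance from the training distribution, so it saturates toward 1 on extreme inputs instead of flagging them as unfamiliar.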


visarga t1_j0m7wj7 wrote

How about a bronze statue of a dog, a caricature of a cat, or a fantasy image that is hard to classify? Are they "other"?
