
suflaj t1_j0jbo92 wrote

Depends on what you mean by confidence. With softmax, you model a probability distribution. You can train your network to output near-100% probabilities per class, but that tells you nothing about how confident it actually is.

Instead, what you could do is the following:

  • get a prediction
  • define the target of your prediction as the resolved label
  • calculate the loss against that target
  • now define your target as the inverse of the initial one
  • calculate the loss again
  • divide the second loss by the first
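The steps above can be sketched like this for the binary case. This is a minimal sketch assuming plain binary cross-entropy on a single probability output; the `confidence_score` helper and the example probabilities are hypothetical, not from any library:

```python
import math

def confidence_score(p, label, eps=1e-12):
    """Ratio of the loss against the inverted label over the loss
    against the resolved label. Higher ratio -> more confident.

    p: predicted probability of the positive class (sigmoid/softmax output)
    label: resolved label, 0 or 1
    """
    def bce(p, y):
        # binary cross-entropy; eps avoids log(0)
        return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

    loss_true = bce(p, label)        # loss on the resolved label
    loss_inverse = bce(p, 1 - label) # loss on the inverted label
    return loss_inverse / loss_true

# A sharp prediction (0.99 for label 1) scores much higher
# than a hesitant one (0.6 for label 1).
print(confidence_score(0.99, 1))
print(confidence_score(0.6, 1))
```

Note the score is only ordinal: it ranks samples by confidence but carries no calibrated meaning on its own.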

Voila, you've got a confidence score for your sample. However, this only gives you a number that is comparable across samples; it is not a percentage of confidence. You do know, however, that the higher the score, the closer the confidence is to 100%, and the lower the score, the closer it is to 0%. Based on the range of your loss, you can probably figure out how to map it to whatever range you want.

For a multi-class problem, you could just sum the ratio of each non-predicted class's loss over your predicted class's loss. So, if you had a classifier that classifies an image into dog, cat, and fish, and your softmax layer spits out 0.9, 0.09, 0.01, your confidence score would be loss(cat)/loss(dog) + loss(fish)/loss(dog).
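The dog/cat/fish example works out like this, assuming per-class cross-entropy loss(c) = −log(p_c); the `multiclass_confidence` helper is a hypothetical sketch, not an established API:

```python
import math

def multiclass_confidence(probs, predicted, eps=1e-12):
    """Sum of loss(other)/loss(predicted) over all non-predicted classes.

    probs: softmax outputs, one per class
    predicted: index of the predicted (argmax) class
    """
    # cross-entropy loss if class i were the target; eps avoids log(0)
    loss = [-math.log(p + eps) for p in probs]
    return sum(loss[i] / loss[predicted]
               for i in range(len(probs)) if i != predicted)

# Softmax output from the text: dog=0.9, cat=0.09, fish=0.01, predicted=dog
score = multiclass_confidence([0.9, 0.09, 0.01], predicted=0)
print(score)
```

A sharper softmax (e.g. 0.99/0.009/0.001) would push the score up further, while a near-uniform output would pull it toward the number of non-predicted classes.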
