jellyfishwhisperer t1_ixlzco4 wrote

The model is doing what you told it to. In that scenario it said keep the lane, and it was right. Congrats! You should not think of the outputs as probabilities. They add to 1, but if the model has a score of 0.3 for keep lane, that doesn't mean there is a 30% chance it should keep the lane. It's just a score (unless you've built some more sophisticated probabilistic modeling into it).
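To make that concrete, here's a minimal calibration check you could run on a held-out set: bucket examples by their "keep lane" score and compare each bucket's mean score to the fraction of times keep lane was actually correct. If the scores were calibrated probabilities, the two numbers would roughly match. The data and function names below are made up for illustration:

```python
# Toy calibration check: compare mean score vs. observed accuracy per bucket.
# scores/labels are made-up illustrative data, not real model output.

scores = [0.30, 0.35, 0.32, 0.70, 0.75, 0.72, 0.68, 0.31]
labels = [1,    0,    1,    1,    1,    1,    0,    1]  # 1 = keep lane was correct

def bucket_stats(scores, labels, lo, hi):
    """Mean score and empirical accuracy for examples with lo <= score < hi."""
    pairs = [(s, y) for s, y in zip(scores, labels) if lo <= s < hi]
    if not pairs:
        return None
    mean_score = sum(s for s, _ in pairs) / len(pairs)
    frac_true = sum(y for _, y in pairs) / len(pairs)
    return mean_score, frac_true

low = bucket_stats(scores, labels, 0.0, 0.5)   # mean score ~0.32, accuracy 0.75
high = bucket_stats(scores, labels, 0.5, 1.0)
print(low, high)
```

In this toy data, examples scored around 0.3 were right 75% of the time: the score and the true frequency don't line up, which is exactly why a raw score of 0.3 shouldn't be read as "30% chance."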

As mentioned above, cross entropy is a good metric. Another metric you may consider is a ROC curve. It will show performance across thresholds. Maybe 0.5 as a cutoff isn't best?
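A ROC curve is just a sweep over decision thresholds, plotting false-positive rate against true-positive rate. Here's a bare-bones sketch in plain Python (the scores and labels are invented; in practice you'd use a validation set, or `sklearn.metrics.roc_curve`):

```python
# Manual ROC sweep: for each threshold, classify score >= t as positive
# and compute (FPR, TPR). scores/labels below are made-up toy data.

def roc_points(scores, labels, thresholds):
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

scores = [0.2, 0.4, 0.55, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1]
for t, (fpr, tpr) in zip([0.3, 0.5, 0.7],
                         roc_points(scores, labels, [0.3, 0.5, 0.7])):
    print(f"threshold={t}: FPR={fpr:.2f}, TPR={tpr:.2f}")
```

Scanning the printed points tells you whether some threshold other than 0.5 gives a better trade-off for your application, e.g. a higher cutoff before switching lanes.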

And for what it's worth I wouldn't want to be in a vehicle that incorrectly switched lanes 7% of the time ;)

2

jellyfishwhisperer t1_iuisl72 wrote

That's about right. Convolution priors in particular lend themselves to edge detection. CV XAI is weird in general though, so I've stepped back a bit. Is a good explanation one that looks good, or one that is faithful to the model, or what? Everyone disagrees. So I've moved to inputs with interpretable features (text, tables, science, etc.).

2