Submitted by Beneficial_Law_5613 t3_z3ik75 in MachineLearning
jellyfishwhisperer t1_ixlzco4 wrote
Reply to comment by Beneficial_Law_5613 in [D] inference on GNN by Beneficial_Law_5613
The model is doing what you told it to. In that scenario it said keep the lane, and it was right. Congrats! You should not think of the outputs as probabilities, though. They sum to 1, but if the model has a score of 0.3 for "keep lane", that doesn't mean there is a 30% chance it should keep the lane. It's just a score (unless you've built some more sophisticated probabilistic modeling into it).
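If you want to see how far those scores are from actual probabilities, a reliability check is one way. Here's a minimal sketch with scikit-learn; `y_true` and `y_score` are placeholder names for held-out labels and model scores, not anything from your code:

    # Compare the mean predicted score to the observed positive rate per bin.
    import numpy as np
    from sklearn.calibration import calibration_curve

    y_true = np.random.randint(0, 2, size=1000)  # placeholder labels
    y_score = np.random.rand(1000)               # placeholder scores

    frac_pos, mean_score = calibration_curve(y_true, y_score, n_bins=10)

    # For a calibrated model these match: a score of 0.3 really would
    # mean ~30% of such cases are positive.
    for m, f in zip(mean_score, frac_pos):
        print(f"mean score {m:.2f} -> observed positive rate {f:.2f}")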
As mentioned above, cross entropy is a good metric. Another one to consider is a ROC curve: it shows performance across all decision thresholds. Maybe 0.5 isn't the best cutoff?
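To make the threshold point concrete, here's a sketch of the sweep with scikit-learn (same placeholder `y_true`/`y_score` as above):

    from sklearn.metrics import roc_curve, roc_auc_score

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print("AUC:", roc_auc_score(y_true, y_score))

    # Each threshold trades false alarms (FPR) against hits (TPR);
    # 0.5 is just one point on this curve, not necessarily the best one.
    for f, t, th in zip(fpr, tpr, thresholds):
        print(f"threshold {th:.2f}: TPR {t:.2f}, FPR {f:.2f}")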
And for what it's worth, I wouldn't want to be in a vehicle that incorrectly switched lanes 7% of the time ;)
PredictorX1 t1_ixlzohu wrote
Good points! I'd also mention that, if probability estimates are desired, the numeric model outputs could be calibrated as a separate step at the end.
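For a neural net, one common way to do that last step is temperature scaling on a validation set. A rough sketch in PyTorch; `model` and `val_loader` are assumed names, and this is one possible approach rather than the only one:

    import torch
    import torch.nn as nn

    def fit_temperature(model, val_loader):
        # Learn one scalar T so that softmax(logits / T) is better calibrated.
        model.eval()
        logits, labels = [], []
        with torch.no_grad():
            for x, y in val_loader:
                logits.append(model(x))
                labels.append(y)
        logits, labels = torch.cat(logits), torch.cat(labels)

        log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
        opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)
        nll = nn.CrossEntropyLoss()

        def closure():
            opt.zero_grad()
            loss = nll(logits / log_t.exp(), labels)
            loss.backward()
            return loss

        opt.step(closure)
        return log_t.exp().item()

At inference you'd divide the logits by the returned T before the softmax; the scaled scores are typically much closer to empirical frequencies.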
Beneficial_Law_5613 OP t1_ixm3fpz wrote
Yes, but in some cases when the car should keep the lane (i.e., it is keeping the lane, not making any lane change), I get 76% that it should make a lane change. That's why I am confused. For more information:

    pred = model(data)
    pred_s = torch.sigmoid(pred) * 100  # nn.Sigmoid is a module class; torch.sigmoid applies the function
    print(pred_s)

When I give it the data of a car that is not making a lane change, I get 76% for all of its data points/frames. But to be honest, I don't know whether this 76% is for lane changing or for lane keeping.
jellyfishwhisperer t1_ixm5zz6 wrote
I'd make sure you know which output corresponds to which prediction first. Metrics can come after that.
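A quick way to check that, assuming you have a few frames whose ground truth you already know (`keep_lane_batch` and `change_lane_batch` are made-up names):

    import torch

    model.eval()
    with torch.no_grad():
        p_keep = torch.sigmoid(model(keep_lane_batch))      # known lane-keeping frames
        p_change = torch.sigmoid(model(change_lane_batch))  # known lane-change frames

    # If the sigmoid output means "probability of lane change", it should be
    # low on the first batch and high on the second; if it's reversed, the
    # output-to-label mapping is flipped.
    print("known keep-lane frames:   mean score", p_keep.mean().item())
    print("known lane-change frames: mean score", p_change.mean().item())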