Lucas_Matheus t1_iyxfcvy wrote

To me, this seems more related to the early-stopping parameters. The important questions are:

  1. What's the minimal percentage drop in validation loss you accept as an improvement? If it's too high (e.g. 20%), early stopping triggers almost immediately and the model barely trains. If it's too low (e.g. 0.05%), training may never stop.
  2. How often do you run validation and check for early stopping? If you check at every validation step, a noisy loss can make the check erratic; if the interval between checks is too long, the model may already be overfitting by the time it stops.
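
A minimal sketch of what I mean (all names and defaults here are my own, not from any particular framework): an early-stopping check with a relative improvement threshold and a patience count covering both points above.

```python
class EarlyStopping:
    """Stop training when validation loss stops improving.

    min_rel_delta: minimum relative drop in validation loss that counts
        as an improvement (0.01 = 1%). Too high -> stops too soon;
        too low -> may never stop (point 1).
    patience: consecutive checks without improvement before stopping.
        Checking too often with a noisy loss makes this erratic;
        checking too rarely risks overfitting between checks (point 2).
    """

    def __init__(self, min_rel_delta=0.01, patience=3):
        self.min_rel_delta = min_rel_delta
        self.patience = patience
        self.best = float("inf")
        self.bad_checks = 0

    def should_stop(self, val_loss):
        # An improvement must beat the best loss by the relative margin.
        if val_loss < self.best * (1 - self.min_rel_delta):
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience
```

For example, with a loss sequence that plateaus (`1.0, 0.8, 0.79, 0.79, 0.79, 0.79`), the first three values count as improvements and the stopper fires after three flat checks in a row.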