Submitted by thanderrine t3_zc0kco in MachineLearning
Lucas_Matheus t1_iyxfcvy wrote
To me this seems more related to the early-stopping parameters. Important questions are:
- What's the minimum percentage drop in validation loss you count as an improvement? If it's too high (e.g. 20%), early stopping triggers quickly and you barely train. If it's too low (e.g. 0.05%), almost any drop counts and training never stops.
- What interval between validation checks are you using? If you check for early stopping at every validation, an erratic loss can make the check inconsistent; if checks are too far apart, the model may already be overfitting by the time you catch it. (A rough code sketch of both knobs follows.)
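
For illustration only, here is a minimal sketch of how those two knobs might look in code; the class and parameter names (`EarlyStopper`, `min_delta`, `patience`) are my own, not from the original post or any specific library.

```python
class EarlyStopper:
    """Minimal early-stopping check based on relative improvement in val loss."""

    def __init__(self, min_delta=0.001, patience=3):
        self.min_delta = min_delta      # minimum relative drop in val loss counted as improvement
        self.patience = patience        # how many checks without improvement before stopping
        self.best_loss = float("inf")
        self.bad_checks = 0

    def should_stop(self, val_loss):
        # Count an improvement only if the loss drops by more than min_delta (relative)
        if val_loss < self.best_loss * (1 - self.min_delta):
            self.best_loss = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience


# Hypothetical usage: call should_stop() only every `check_interval` epochs,
# which is the "interval of validations" the comment refers to.
stopper = EarlyStopper(min_delta=0.005, patience=2)
check_interval = 2
for epoch, val_loss in enumerate([1.0, 0.8, 0.79, 0.795, 0.80, 0.81]):
    if epoch % check_interval == 0 and stopper.should_stop(val_loss):
        print(f"stopping at epoch {epoch}")
        break
```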