Submitted by twocupv60 t3_xvem36 in MachineLearning
suflaj t1_ir1fjgt wrote
Reply to comment by neato5000 in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
In practice this isn't true for modern DL models, especially those trained with modern adaptive optimizers like Adam(W). Adam(W) can look optimal at the start, but it's anyone's game for the rest of training.
In other words, the optimal hyperparameters will probably end up being different: since you typically need to switch to SGD to reach maximum performance, you'll have to retune the hyperparameters you had already accepted as optimal. Successful early training only somewhat guarantees you won't diverge; to end up with the best final weights you'll still have to do additional hyperparameter search (and there is no guarantee the checkpoint from your early training will lead to the best final weights either).
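A minimal sketch of what that retuning might look like in PyTorch, assuming you resume from an early Adam(W) checkpoint and re-search the SGD learning rate; `model_fn`, `train_one_epoch`, `evaluate`, and `"adam_checkpoint.pt"` are hypothetical placeholders, not part of the original comment:

```python
import copy
import torch

candidate_lrs = [1e-3, 1e-2, 1e-1]  # small grid to re-tune after switching optimizers
checkpoint = torch.load("adam_checkpoint.pt")  # weights saved from the early Adam(W) phase

best_lr, best_score = None, float("-inf")
for lr in candidate_lrs:
    model = model_fn()  # hypothetical: builds a fresh model instance
    model.load_state_dict(copy.deepcopy(checkpoint["model"]))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    train_one_epoch(model, optimizer)  # hypothetical: runs one epoch of training
    score = evaluate(model)            # hypothetical: returns a validation metric
    if score > best_score:
        best_lr, best_score = lr, score
```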
red_dragon t1_ir3t4b6 wrote
I'm running into this problem with Adam(W). Are there any suggestions on how to avoid it? Many of my experiments start off better than the baseline, but ultimately do worse.
suflaj t1_ir4ow8t wrote
Switch to SGD after 1 epoch or so
But if they do worse than the baseline, something else is likely the problem. Adam(W) does not kill performance; it just, for some reason, isn't as effective at reaching the best final performance as simpler optimizers.
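A minimal sketch of the "switch to SGD after 1 epoch or so" idea in a standard PyTorch training loop; the model, `train_one_epoch`, and the learning rates here are hypothetical placeholders, not tuned values:

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder model
num_epochs = 10

# start with AdamW for the first epoch
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

for epoch in range(num_epochs):
    if epoch == 1:
        # after the first epoch, hand off to plain SGD with momentum
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    train_one_epoch(model, optimizer)  # hypothetical: runs one epoch of training
```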