Submitted by PleaseKillMeNowOkay t3_xtadfd in deeplearning
UsernameRelevant t1_iqq5hp9 wrote
> Is my second network going to perform at least as well as my first network?
Impossible to say. In general, more parameters mean the model can achieve a better fit, but also that it overfits more easily.
Why don’t you compare the models on a test set?
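[Editor's note: a minimal sketch of the comparison suggested above, scoring both networks on the same held-out test set. The predictions and targets here are invented placeholders, not the OP's data.]

```python
# Compare two models on an identical held-out test set by mean squared error.
# All values below are made-up placeholders for illustration.

def mse(preds, targets):
    """Mean squared error over paired predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

# Placeholder outputs from the two networks on the same test inputs.
targets       = [1.0, 2.0, 3.0, 4.0]
simple_preds  = [1.1, 1.9, 3.2, 3.8]   # smaller network
complex_preds = [1.4, 1.5, 3.6, 3.3]   # larger network

print(f"simple:  {mse(simple_preds, targets):.3f}")
print(f"complex: {mse(complex_preds, targets):.3f}")
```

The key point is that both models must be scored on the same test examples, which neither model saw during training; otherwise the comparison says nothing about generalization.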
PleaseKillMeNowOkay OP t1_iqqw34u wrote
I did. The second model performed worse. I didn't think that was possible.
SimulatedAnnealing t1_iqs94b6 wrote
The most plausible explanation is overfitting. How do they compare in terms of error on the training set?
PleaseKillMeNowOkay OP t1_iqscxo9 wrote
The simpler model had a lower training loss after the same number of epochs. I trained the second model until it reached the same training loss as the first, which took much longer. Its validation loss did not improve and showed a slight upward trend, which I understand means it is overfitting.
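[Editor's note: a hedged sketch of the pattern the OP describes: training loss keeps falling while validation loss flattens and then drifts upward, the classic overfitting signature. An early-stopping check with a patience counter catches this. The validation curve below is invented for illustration, not the OP's actual losses.]

```python
# Early stopping on validation loss: stop once `patience` consecutive
# epochs pass without a new best validation loss.

def best_epoch(val_losses, patience=2):
    """Return the epoch of the last validation improvement before
    `patience` consecutive non-improving epochs."""
    best, best_i, bad = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i, bad = loss, i, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_i

# Invented validation curve: improves, then trends slightly upward.
val = [0.90, 0.70, 0.60, 0.58, 0.59, 0.61, 0.63]
print(best_epoch(val))
```

In this situation, continuing to train the larger model to match the smaller one's training loss only pushes it further past the point where validation loss bottomed out, so stopping at that minimum (and keeping the checkpoint from that epoch) is usually the better comparison.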