Submitted by Thijs-vW t3_yta05n in deeplearning
jobeta t1_iw6zxwa wrote
Reply to comment by scitech_boom in Update an already trained neural network on new data by Thijs-vW
I don’t have much experience with that specific problem, but I would hesitate to generalize like this to “models that hit the bottom” without knowing what the validation loss actually looked like and what the new data looks like. Chances are, the new data is not perfectly sampled from the same distribution as the first dataset, and its features have some idiosyncratic/new statistical properties. In that case, once you feed it to your pre-trained model, the loss is mechanically no longer at the minimum it supposedly reached in the first training run.
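To make that concrete, here’s a minimal sketch (my own toy example, not from the thread) using a closed-form linear model in NumPy: fit it to its loss minimum on one dataset, then evaluate the same weights on data with a shifted input distribution and a slightly different input-output relationship. The assumed means, slopes, and noise levels are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Old" dataset: y = 2x + 1 plus small noise
X_old = rng.normal(0.0, 1.0, size=(500, 1))
y_old = 2.0 * X_old[:, 0] + 1.0 + rng.normal(0.0, 0.1, size=500)

# Fit a linear model to (near) its loss minimum via least squares
A_old = np.hstack([X_old, np.ones((len(X_old), 1))])
w, *_ = np.linalg.lstsq(A_old, y_old, rcond=None)

def mse(X, y, w):
    A = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((A @ w - y) ** 2))

# "New" data with shifted statistics: different input mean and a new slope
X_new = rng.normal(1.5, 1.0, size=(500, 1))
y_new = 2.5 * X_new[:, 0] + 1.0 + rng.normal(0.0, 0.1, size=500)

loss_old = mse(X_old, y_old, w)  # near the noise floor
loss_new = mse(X_new, y_new, w)  # the same weights are no longer at a minimum
```

`loss_new` comes out well above `loss_old`, which is the point: the moment the data distribution changes, the parameters that minimized the old loss are just a starting point for further training, not a minimum of the new objective.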