ContributionWild5778

ContributionWild5778 t1_j5g6sio wrote

This! I would just add that you can rarely pin down the exact reason why training from scratch gives lower accuracy. Do you have enough data for the network to learn the features? Have you compared the validation loss of your from-scratch model against the pre-trained one under cross-validation? Did you try removing or adding a dense layer to see how performance changes?
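The last check above (adding/removing a dense layer) can be sketched like this. This is a minimal, hypothetical PyTorch snippet, not anyone's actual model: the layer sizes (784-in, 10-out, 128/64 hidden units) are made-up placeholders, and the idea is simply to build two head variants and compare their validation loss on the same split.

```python
import torch.nn as nn

def build_classifier(extra_dense: bool = False) -> nn.Sequential:
    """Small MLP head; toggling `extra_dense` adds one hidden layer
    so the two variants can be compared on the same validation split."""
    layers = [nn.Flatten(), nn.Linear(784, 128), nn.ReLU()]
    if extra_dense:
        # Variant with one additional dense layer.
        layers += [nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)]
    else:
        layers += [nn.Linear(128, 10)]
    return nn.Sequential(*layers)

base = build_classifier(extra_dense=False)
deeper = build_classifier(extra_dense=True)
# Train each variant the same way and compare validation losses
# to see whether the extra dense layer actually helps.
```

Keeping everything else fixed (data split, optimizer, epochs) is what makes the comparison between the two variants meaningful.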


ContributionWild5778 t1_iw97xid wrote

I believe this is an iterative process when doing transfer learning. You usually freeze the early layers first, because low-level feature extraction (lines and contours) happens there, then unfreeze and train only the last layers, where the high-level features are extracted. At the same time, it depends on how different your new dataset is from the one the model was originally trained on. If it shares similar characteristics/features, freezing the early layers would be my choice.
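The freeze-early/train-last pattern looks roughly like this in PyTorch. The tiny CNN here is a hypothetical stand-in for a pretrained backbone (the channel counts and the 10-class head are invented for illustration); the point is the `requires_grad` toggling.

```python
import torch.nn as nn

# Hypothetical small CNN standing in for a pretrained backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # early layers: low-level features (edges, contours)
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # later convs: higher-level features
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                           # task-specific classifier head
)

# Freeze everything, then unfreeze only the final linear layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
# Only the head's 32*10 weights + 10 biases (330 params) will be updated.
```

If the new dataset turns out to be quite different, you would unfreeze progressively earlier layers in later passes, which is what makes the process iterative.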
