GullibleBrick7669 t1_iwc3fyj wrote

From my understanding, and from results on a recent project of mine with a similar problem, augmenting only the training data makes the validation accuracy easier to interpret: the validation set then behaves exactly like the test set, with no alterations. When you plot training and validation loss, that gives you a realistic picture of how the model will perform on the test data. So for my problem I augmented only the training data and left the validation and test sets as-is, roughly like the sketch below.
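Here's a minimal sketch of that setup, assuming a torchvision image pipeline (the directory paths and the specific augmentations are just placeholders, pick whatever fits your data):

```python
from torchvision import datasets, transforms

# Augmentations are applied to the training split only.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

# Validation and test splits get only deterministic preprocessing,
# so validation loss mirrors what the model will see at test time.
eval_transform = transforms.Compose([
    transforms.ToTensor(),
])

# "data/train" and "data/val" are placeholder directory names.
train_set = datasets.ImageFolder("data/train", transform=train_transform)
val_set = datasets.ImageFolder("data/val", transform=eval_transform)
```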

Also, looking at your plots, it could be a sign of an unrepresentative validation set. Check that there are enough samples for each class; if there aren't, try applying the same augmentations you use on the training data to the validation data as well to generate more samples.
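A quick way to check class balance, if you're using an ImageFolder-style dataset (the `.targets`/`.classes` attributes are torchvision-specific, so adapt for other dataset types):

```python
from collections import Counter
from torchvision import datasets, transforms

# Unaugmented validation set, as in the sketch above
# ("data/val" is a placeholder path).
val_set = datasets.ImageFolder("data/val", transform=transforms.ToTensor())

# Count samples per class to spot under-represented classes.
counts = Counter(val_set.targets)
for idx, n in sorted(counts.items()):
    print(f"{val_set.classes[idx]}: {n} samples")
```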
