neriticzone t1_jd5se2v wrote
Reply to [D] Simple Questions Thread by AutoModerator
Feedback on stratified k-fold cross-validation
I am doing some applied work with CNNs in the academic world.
I have a relatively small dataset.
I am doing 10-fold stratified cross-validation (I think that's the right term): I make an initial train-test split, and then the data in the training split is further cross-validated into 10 stratified train-validation folds.
I then run the ensemble of 10 trained models against the held-out test split, and I select the predictions from the model that performs best on the test data as the final predicted values for the test set.
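Roughly what I mean, as a minimal runnable sketch (scikit-learn, with LogisticRegression standing in for the CNN; the data shapes, split sizes, and metric are placeholders, not my actual setup):

```python
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegression  # stand-in for the CNN

# Placeholder data; in my case X are images and y are class labels
X = np.random.rand(200, 20)
y = np.random.randint(0, 2, 200)

# Initial stratified train/test split; the test set is held out entirely
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# 10-fold stratified CV on the training data only
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_models, fold_test_scores = [], []

for tr_idx, val_idx in skf.split(X_train, y_train):
    model = LogisticRegression(max_iter=1000)  # CNN training goes here
    model.fit(X_train[tr_idx], y_train[tr_idx])
    # (the validation fold X_train[val_idx], y_train[val_idx] is what I use
    #  for monitoring/tuning during training in the real setup)
    fold_models.append(model)
    # The step I'm unsure about: scoring each fold's model on the *test* set
    fold_test_scores.append(model.score(X_test, y_test))

# ...and then keeping only the best-on-test model's predictions
best_model = fold_models[int(np.argmax(fold_test_scores))]
y_pred = best_model.predict(X_test)
```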
Is this a reasonable strategy? Thank you!