
jellyfishwhisperer t1_iriddyc wrote

Regularization and dropout help with overfitting. They will almost always reduce your training accuracy. What you need is a held-out test dataset to compare against.
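
Something like this, as a rough sketch (assuming `inputs`, `targets`, and a compiled Keras `model` already exist):

from sklearn.model_selection import train_test_split

# Hold out 20% for testing, then 20% of the remainder for validation.
X_train, X_test, y_train, y_test = train_test_split(inputs, targets, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2)

model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
print(model.evaluate(X_test, y_test))  # loss/accuracy on data the model never saw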

11

perfopt OP t1_iridohw wrote

Thank you for the response. I am splitting my data into train and validation sets. Do you mean another set for testing?

The baseline is overfitting - training accuracy is really high and val accuracy is much lower. That is why I added L2+Dropout.

Since the validation accuracy is still very low (52%), should I not focus on improving that?

1

manuLearning t1_irie761 wrote

I have always had good experiences with dropout. Try putting a dropout layer of around 0.75 after your first layer and one dropout layer before your last layer. You can also put a light 0.15 dropout layer before your first layer.
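
Roughly this layout, as a sketch; the Dense sizes, input shape, class count, and the rate of the last dropout layer are placeholders, not part of the suggestion:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(40,)),               # placeholder input shape
    layers.Dropout(0.15),                    # light dropout before the first layer
    layers.Dense(256, activation="relu"),    # first hidden layer (placeholder size)
    layers.Dropout(0.75),                    # heavy dropout after the first layer
    layers.Dense(128, activation="relu"),    # placeholder hidden layer
    layers.Dropout(0.75),                    # dropout before the last layer (assumed rate)
    layers.Dense(10, activation="softmax"),  # placeholder class count
])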

How similar are the test and val sets?

2

perfopt OP t1_iriens4 wrote

For creating the test and val sets I used train_test_split from sklearn.

I'll manually examine it.

But in general shouldn't the distribution be OK?

from sklearn.model_selection import train_test_split
inputs_train, inputs_test, targets_train, targets_test = train_test_split(inputs, targets, test_size=0.1)
1

manuLearning t1_irij2hl wrote

A rule of thumb is to take around 30% as the val set.
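
With the same sklearn call from above, that would be roughly:

# Same split, but holding out ~30% instead of 10%.
inputs_train, inputs_val, targets_train, targets_val = train_test_split(inputs, targets, test_size=0.3)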

1

perfopt OP t1_irij7vh wrote

I tried that as well, with similar results when adding L2+dropout.

0