AKavun OP t1_j3btlem wrote

I also have a validation accuracy of around 50%, which is basically chance level, i.e. what you would expect from random guessing.

I removed the weight decay to keep things simpler and adjusted the learning rate to 0.0003. I will update this thread with the results.
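For reference, the optimizer setup now looks roughly like this (a sketch assuming PyTorch's Adam; the tiny stand-in model is just so the snippet runs):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the actual network, just so this runs

# lr set to 0.0003; weight_decay is left at Adam's default of 0,
# which is equivalent to removing weight decay
optimizer = torch.optim.Adam(model.parameters(), lr=0.0003)
```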

Thank you for taking the time to help.

1

suflaj t1_j3bubtm wrote

Another problem you will likely have is your very small convolutions. Output channel counts of 8 and 16 are probably only enough to solve something like MNIST. You should probably use something more like 32 and 64, along with larger kernels and strides, so the network relies less on the linear layers to do the work for you (see the sketch at the end of this comment).

Finally, you are not using nonlinear activations between layers. A composition of linear operations is itself linear, so your whole network essentially acts like one smaller convolutional layer with a flatten and softmax.
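Here is a minimal sketch of what both fixes could look like together (the input channels, kernel sizes, strides, and class count are illustrative placeholders, since your actual architecture isn't shown here):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),   # wider: 32 output channels
    nn.ReLU(),                                              # nonlinearity between layers
    nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),  # wider: 64 output channels
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # one common way to shrink the feature map before the linear layer
    nn.Flatten(),
    nn.Linear(64, 10),        # logits; fold the softmax into the loss (e.g. CrossEntropyLoss)
)
```

Without the `ReLU`s, the stacked convolutions and linear layer would compose into a single linear map, which is why the network currently behaves like one smaller convolutional layer.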

1