
LeN3rd t1_jcgqzvo wrote

If you have more variables than datapoints, you will run into problems if your model starts learning the training data by heart. Your model overfits to the training data: https://en.wikipedia.org/wiki/Overfitting

You can either reduce the number of parameters in your model, or apply a prior (a constraint on your model parameters) to improve test dataset performance.
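As a rough sketch of what that looks like (scikit-learn, made-up data, just to illustrate the point): fit an unregularized linear regression with more features than samples, then the same problem with an L2 penalty (ridge, which corresponds to a Gaussian prior on the weights), and compare test performance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 50 samples, 100 features: more variables than datapoints.
n, p = 50, 100
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:5] = 1.0                      # only 5 features actually matter
y = X @ true_w + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)     # no constraint on the weights
ridge = Ridge(alpha=1.0).fit(X_tr, y_tr)     # L2 penalty ~ Gaussian prior

print("train R^2 (OLS):  ", ols.score(X_tr, y_tr))    # ~1.0, memorized
print("test  R^2 (OLS):  ", ols.score(X_te, y_te))    # typically poor
print("test  R^2 (ridge):", ridge.score(X_te, y_te))  # usually much better
```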

Since neural networks (the standard empirical machine learning tools nowadays) impose structure on their parameters, they can have many more parameters than simple linear regression models, but they seem to run into problems when the number of parameters in the network roughly matches the number of datapoints. This has only been shown empirically; I do not know of any mathematical proof for it.
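For a sense of scale, here is a small hypothetical MLP whose parameter count easily exceeds a (made-up) dataset size; the numbers are purely for illustration.

```python
import torch.nn as nn

# A small fully connected network for, say, 100 input features.
model = nn.Sequential(
    nn.Linear(100, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 1),
)

n_params = sum(p.numel() for p in model.parameters())
n_datapoints = 10_000                 # hypothetical dataset size

# ~92k parameters vs 10k datapoints: far more parameters than samples,
# yet networks like this are routinely trained successfully.
print(n_params, n_datapoints, n_params / n_datapoints)
```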

1

DreamMidnight t1_jchxtfy wrote

Yes, although I am specifically looking into the reasoning behind "at least 10 datapoints per variable."

What is the mathematical justification for this minimum?

1

LeN3rd t1_jcislrk wrote

I have not heard this before. Where is it from? I know that you should have more datapoints than parameters in classical models.

1

LeN3rd t1_jct6arv wrote

Ok, so all of these are linear (or logistic) regression models, for which it makes sense to have more datapoints, because the weights aren't as constrained as in, e.g., a convolutional layer. But it is still a rule of thumb, not exactly a proof.
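To make the "constrained weights" point concrete, here is a sketch comparing the parameter count of a 3×3 convolutional layer with a dense layer connecting the same input and output sizes (shapes chosen arbitrarily for illustration):

```python
import torch.nn as nn

# Map a 32x32 image with 3 channels to 16 output channels.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)  # same input/output sizes, no weight sharing

conv_params = sum(p.numel() for p in conv.parameters())    # 16*3*3*3 + 16 = 448
dense_params = sum(p.numel() for p in dense.parameters())  # ~50 million

# The conv layer sees the same pixels but shares its 3x3 kernels across
# positions, so it has orders of magnitude fewer free parameters.
print(conv_params, dense_params)
```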

1

VS2ute t1_jd1irhb wrote

If you have random noise on a variable, it can have a substantial effect when you have too few samples.
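A quick simulation of that (numpy only, made-up numbers): estimate a single regression slope from noisy data at different sample sizes and look at how much the estimate scatters across repetitions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope = 2.0

def slope_estimate_spread(n_samples, n_repeats=1000, noise_std=1.0):
    """Fit y = slope * x + noise by least squares many times and
    return the standard deviation of the estimated slope."""
    slopes = []
    for _ in range(n_repeats):
        x = rng.normal(size=n_samples)
        y = true_slope * x + noise_std * rng.normal(size=n_samples)
        slopes.append(np.polyfit(x, y, deg=1)[0])  # fitted slope
    return np.std(slopes)

# With few samples the noise dominates the estimate; with more it averages out.
print("spread of slope estimate, n=5:  ", slope_estimate_spread(5))
print("spread of slope estimate, n=500:", slope_estimate_spread(500))
```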

1