Submitted by AutoModerator t3_11pgj86 in MachineLearning
LeN3rd t1_jcgqzvo wrote
Reply to comment by DreamMidnight in [D] Simple Questions Thread by AutoModerator
If you have more variables than datapoints, you will run into problems if your model starts memorizing the training data. Your model overfits to the training data: https://en.wikipedia.org/wiki/Overfitting
You can either reduce the number of parameters in your model or apply a prior (a constraint on your model parameters) to improve test dataset performance.
Since neural networks (the standard empirical machine learning tools nowadays) impose structure on their parameters, they can get away with many more parameters than simple linear regression models, but they still seem to run into problems when the number of parameters in the network roughly matches the number of datapoints. This is only shown empirically; I do not know of any mathematical proof for it.
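To make the overfitting point concrete, here is a minimal sketch (assuming numpy and scikit-learn, with made-up sizes) of a linear model with more features than training points: the unregularized fit memorizes the training data, while an L2 prior (ridge penalty) trades a little training error for much better test error.

```python
# Minimal sketch: overfitting when parameters outnumber datapoints,
# and how an L2 prior (ridge penalty) helps. Sizes are arbitrary.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

n_train, n_test, n_features = 20, 200, 50    # more features than training points
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
true_w = np.zeros(n_features)
true_w[:5] = 1.0                             # only 5 features actually matter
y_train = X_train @ true_w + 0.5 * rng.normal(size=n_train)
y_test = X_test @ true_w + 0.5 * rng.normal(size=n_test)

ols = LinearRegression().fit(X_train, y_train)    # no prior: memorizes the training set
ridge = Ridge(alpha=10.0).fit(X_train, y_train)   # L2 prior on the weights

for name, model in [("OLS", ols), ("Ridge", ridge)]:
    print(name,
          "train MSE:", round(mean_squared_error(y_train, model.predict(X_train)), 3),
          "test MSE:", round(mean_squared_error(y_test, model.predict(X_test)), 3))
```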
DreamMidnight t1_jchxtfy wrote
Yes, although I am specifically looking into the reasoning behind "at least 10 datapoints per variable."
What is the mathematical reasoning behind this minimum?
LeN3rd t1_jcislrk wrote
I have not heard this before. Where is it from? I know that you should have more datapoints than parameters in classical models.
DreamMidnight t1_jcrh53z wrote
Here are some sources:
https://home.csulb.edu/~msaintg/ppa696/696regmx.htm
https://developers.google.com/machine-learning/data-prep/construct/collect/data-size-quality (order of magnitude in this case means 10)
LeN3rd t1_jct6arv wrote
OK, so all of these are linear (or logistic) regression models, for which it makes sense to need more data points, because the weights aren't as constrained as in, e.g., a convolutional layer. But it is still a rule of thumb, not exactly a proof.
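For intuition on why convolutional weights are more constrained: weight sharing means a conv layer has far fewer free parameters than a fully connected layer over the same input. A rough parameter-count comparison (assuming PyTorch, sizes picked arbitrarily):

```python
# Rough sketch (assumes PyTorch): weight sharing constrains the parameter count.
import torch.nn as nn

# A 32x32 single-channel input, mapped to 16 output channels/units.
fc = nn.Linear(32 * 32, 16 * 32 * 32)               # dense: every input connects to every output
conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # conv: one shared 3x3 kernel per output channel

count = lambda m: sum(p.numel() for p in m.parameters())
print("fully connected:", count(fc))    # ~16.8M parameters
print("convolutional: ", count(conv))   # 160 parameters (16 * 3*3*1 weights + 16 biases)
```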
VS2ute t1_jd1irhb wrote
If you have random noise on a variable, it can have a substantial effect on the fit when there are too few samples.
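A quick simulation (numpy only, numbers just illustrative) shows this: the fitted slope of a one-variable regression swings wildly with 3 samples and settles down around 10+ samples per variable.

```python
# Toy simulation: coefficient estimates under noise, as a function of sample size.
import numpy as np

rng = np.random.default_rng(1)
true_slope = 2.0

def fitted_slopes(n_samples, n_repeats=1000):
    slopes = []
    for _ in range(n_repeats):
        x = rng.normal(size=n_samples)
        y = true_slope * x + rng.normal(size=n_samples)  # unit-variance noise
        slopes.append(np.polyfit(x, y, 1)[0])            # least-squares slope
    return np.array(slopes)

for n in (3, 10, 100):
    s = fitted_slopes(n)
    print(f"n={n:3d}  slope mean={s.mean():.2f}  std={s.std():.2f}")
```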