Submitted by vagartha t3_10757aq in deeplearning
Hi all!
I'm relatively new to deep learning, so I had some questions. Overall, what does it mean if your model's training and validation accuracy don't improve between epochs? Does this mean your model is not complex enough or the data is insufficient? Or is the objective I'm trying to predict too complicated?
For context, I'm trying to predict NBA games using a combination of LSTMs and MLPs. I feed the last 10 games for a team into one LSTM and the last 3 meetings between the two relevant teams into another LSTM. I also include the records of the 2 teams at the time of the game.
I'm combining all of these into a fully connected layer and classifying the home team as winning or not.
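For reference, here's roughly how it's wired up (a minimal PyTorch sketch; the feature dimensions and hidden sizes below are placeholders, not the exact values from my notebook):

    import torch
    import torch.nn as nn

    class GamePredictor(nn.Module):
        # Sketch of the setup described above; feature/hidden sizes are placeholders.
        def __init__(self, team_feat_dim=20, h2h_feat_dim=20, hidden=64):
            super().__init__()
            self.team_lstm = nn.LSTM(team_feat_dim, hidden, batch_first=True)  # last 10 games
            self.h2h_lstm = nn.LSTM(h2h_feat_dim, hidden, batch_first=True)    # last 3 meetings
            self.fc = nn.Sequential(
                nn.Linear(hidden * 2 + 2, 64),  # +2 for the two teams' records
                nn.ReLU(),
                nn.Linear(64, 1),               # single logit: home team wins or not
            )

        def forward(self, last10, last3, records):
            # last10: (B, 10, team_feat_dim), last3: (B, 3, h2h_feat_dim), records: (B, 2)
            _, (h_team, _) = self.team_lstm(last10)
            _, (h_h2h, _) = self.h2h_lstm(last3)
            x = torch.cat([h_team[-1], h_h2h[-1], records], dim=1)
            return self.fc(x)  # raw logit; train with BCEWithLogitsLoss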
I can't seem to get above 75% accuracy. More importantly, it doesn't seem to budge between epochs! Any ideas what could be going on here?
Here is also a link to my Colab notebook if anyone can help!
https://colab.research.google.com/drive/1VAG5EXLXq9To7wtZVOT3VoX3X2mvILya?usp=sharing
Thanks in advance!
junetwentyfirst2020 t1_j3kmct1 wrote
Have you tried to overfit on a single piece of data to ensure that your model can actually learn? You should be able to get effectively 100% accuracy when overfitting. If you can't do this, then you have a problem.
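Something like this (a minimal sketch assuming a PyTorch setup; `model` and `train_loader` are placeholders for whatever is in your notebook):

    import torch
    import torch.nn as nn

    # Sanity check: train repeatedly on one fixed batch.
    # If the model is wired up correctly, loss should drop toward ~0
    # and accuracy toward 100% on that batch within a few hundred steps.
    model = GamePredictor()                    # placeholder: your model here
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    last10, last3, records, y = next(iter(train_loader))  # one fixed batch
    for step in range(500):
        optimizer.zero_grad()
        logits = model(last10, last3, records).squeeze(1)
        loss = criterion(logits, y.float())
        loss.backward()
        optimizer.step()
        if step % 100 == 0:
            acc = ((logits > 0) == y.bool()).float().mean().item()
            print(f"step {step}: loss {loss.item():.4f}, acc {acc:.2f}")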