Submitted by vagartha t3_10757aq in deeplearning

Hi all!

I'm relatively new to deep learning, so I had some questions. Overall, what does it mean if your model's training and validation accuracy doesn't improve between epochs? Does this mean your model is not complex enough, or that the data is insufficient? Or is the objective I'm trying to predict too complicated?

For context, I'm trying to predict NBA games using a combination of LSTMs and MLPs. I have the last 10 games for a team that I'm feeding into 1 LSTM and the last 3 meetings between the relevant teams into another LSTM. I also have the records of the 2 teams at the time of the game.

I'm combining all of these into a fully connected layer and classifying the home team as winning or not.

I can't seem to get above 75% accuracy. More importantly, it doesn't seem to budge between epochs! Any ideas what could be going on here?

Here is also a link to my colab notebook if anyone can help!

https://colab.research.google.com/drive/1VAG5EXLXq9To7wtZVOT3VoX3X2mvILya?usp=sharing


Thanks in advance!

Comments

junetwentyfirst2020 t1_j3kmct1 wrote

Have you tried overfitting on a single piece of data to ensure that your model can actually learn? You should be able to get effectively 100% accuracy when overfitting. If you can't do this, then you have a problem.
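A minimal sketch of that sanity check, assuming a PyTorch setup like the notebook's. The tiny MLP here is a hypothetical stand-in for the actual LSTM+MLP model; the point is the procedure, not the architecture:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model: any trainable binary classifier works for this check.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# One small, fixed batch. If the model can't memorize 4 examples,
# the bug is in the model or training loop, not in the data.
x = torch.randn(4, 8)
y = torch.randint(0, 2, (4, 1)).float()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

acc = ((model(x) > 0).float() == y).float().mean().item()
print(acc)  # should reach 1.0 on this memorized batch
```

If accuracy stays flat even here, the usual suspects are a learning rate of zero, a detached graph, labels misaligned with inputs, or the optimizer not seeing all the parameters.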

rockpooperscissors t1_j3l41yd wrote

There are some stats out there suggesting that NBA analysts are only correct about 70% of the time. 75% accuracy seems good.

tsgiannis t1_j3vx1lm wrote

If you get 75% accuracy no matter what, you can consider yourself a game breaker. But do test the model, and when I say test, I don't mean on a static subset you have tested again and again. For example (I haven't read the code yet): pick last year and train your model on the first 60% of the games. Say the season has 100 games and you have trained on 60. Did you manage to accurately predict games 61, 62, ... 70 (let's take them in batches of 10)? Now the next batch: are you still carrying an accuracy over 75%? Like you, I had a model for baseball that was around 60% accurate, but when I put it to the test it failed hard. For now such a high accuracy seems like a good starting point, but do test.
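The scheme described above is essentially walk-forward evaluation. A sketch, where `train` and `predict` are hypothetical stand-ins for the poster's actual fit/inference code:

```python
# Walk-forward evaluation: train on the first 60% of a date-sorted
# season, then score each successive batch of 10 games, refitting on
# everything seen so far before each batch.

def walk_forward(games, labels, train, predict, batch_size=10, warmup=0.6):
    """Return per-batch accuracies on chronologically later games."""
    split = int(len(games) * warmup)
    accuracies = []
    for start in range(split, len(games), batch_size):
        model = train(games[:start], labels[:start])  # refit on all past games
        batch_x = games[start:start + batch_size]
        batch_y = labels[start:start + batch_size]
        preds = predict(model, batch_x)
        correct = sum(int(p == y) for p, y in zip(preds, batch_y))
        accuracies.append(correct / len(batch_y))
    return accuracies

# Toy check with a majority-class "model" on a 100-game season:
train = lambda xs, ys: round(sum(ys) / len(ys))   # predict the majority label
predict = lambda model, xs: [model] * len(xs)
games = list(range(100))
labels = [i % 2 for i in range(100)]              # alternating wins/losses
print(walk_forward(games, labels, train, predict))  # [0.5, 0.5, 0.5, 0.5]
```

If per-batch accuracy degrades as the season progresses, the model is likely leaning on stale information rather than learning something that generalizes forward in time.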

vagartha OP t1_j4c0ouq wrote

So I've separated my dataset into train and validation datasets (90%, 10% split). Is this what you mean?

Or should I have a separate test dataset on top of that you think?
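For time-ordered data like game results, a common answer to the question above is yes: hold out a final test set, and split by date rather than randomly, so the model is never evaluated on games that precede its training data. A minimal sketch (the 80/10/10 proportions are an arbitrary choice, not from the thread):

```python
# Chronological train/validation/test split for date-sorted data.

def chronological_split(games, train_frac=0.8, val_frac=0.1):
    """Split an already date-sorted sequence into train/val/test."""
    n = len(games)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (games[:n_train],
            games[n_train:n_train + n_val],
            games[n_train + n_val:])

train, val, test = chronological_split(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```

A random 90/10 split on seasons of games can leak future information into training (e.g. training on a March game while validating on a January one), which tends to inflate validation accuracy.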

tsgiannis t1_j4c2r0d wrote

No... as I wrote, take a previous year's complete data. Let's take the 2021 season: you have gone back to 2021, and you have absolutely no knowledge of the outcomes of the games.

The season starts and you are all fired up to earn some money. You wait until a reasonable amount of games have been played; around 60% is a good percentage, I reckon. So you start training the model. You start with a base amount of cash, e.g. $100. You predict the coming 5-10 games. How did the model perform? Did you make a profit or not? Again, the next 5-10 games. You play until either you run out of money or the season ends. If you run out of money, the bitter truth: back to the drawing board. If the season ends, measure your money:

$100 - $120: well, at least you didn't lose, but it was tight
$121 - $150: maybe you have something
$151 - $200: maybe you should give it a go
> $201: let's make some money 🤑
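A toy simulation of the bankroll scheme above. The flat $10 stake and even-money (2.0 decimal) odds are assumptions for illustration, not from the thread; real books price favorites and underdogs very differently:

```python
# Toy bankroll simulation: flat stake per game at fixed decimal odds.

def simulate_bankroll(predictions, outcomes, start=100.0, stake=10.0, odds=2.0):
    """Return the bankroll history; stop early if we go bust."""
    bank = start
    history = [bank]
    for pred, actual in zip(predictions, outcomes):
        if bank < stake:
            break  # busted: back to the drawing board
        bank -= stake
        if pred == actual:
            bank += stake * odds  # payout on a correct pick
        history.append(bank)
    return history

# Three right, one wrong, at even odds: end up +$20.
print(simulate_bankroll([1, 1, 1, 1], [1, 1, 0, 1]))
# [100.0, 110.0, 120.0, 110.0, 120.0]
```

At even odds a 75%-accurate model is comfortably profitable, which is exactly why the real test is whether that 75% survives on games the model has never seen.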

vagartha OP t1_j4c36zw wrote

Haha, I live in CA, so sports gambling is out of the question...

I was actually hoping to maybe write a paper and submit it to something like the Sloan conference, or send it in to 538 as an add-on to my resume?

Also, my model currently uses data from seasons going all the way back to 2014. A larger dataset would make a better model, right? So why not use more historical data?