tsgiannis
tsgiannis t1_j8wgwsj wrote
Reply to comment by [deleted] in My Neural Net is stuck, I've run out of ideas by [deleted]
I had serious issues with the whole training of VGG...this fixed it...
give it a spin.
tsgiannis t1_j8w2s8i wrote
Instead of rescale = 1./255, try using the model's own preprocess function and report back
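For VGG-style models the model's own preprocess function (in Keras, `tf.keras.applications.vgg16.preprocess_input`) zero-centers with the ImageNet channel means in BGR order rather than rescaling to [0, 1]. A minimal NumPy sketch of the difference (the mean values are the standard ImageNet ones; the function names here are illustrative, not Keras itself):

```python
import numpy as np

# ImageNet per-channel means in BGR order, as used by VGG-style preprocessing.
IMAGENET_BGR_MEANS = np.array([103.939, 116.779, 123.68])

def vgg_style_preprocess(img_rgb):
    """img_rgb: float array of shape (H, W, 3) with values in [0, 255]."""
    img_bgr = img_rgb[..., ::-1].astype("float64")  # RGB -> BGR
    return img_bgr - IMAGENET_BGR_MEANS             # zero-center per channel

def naive_rescale(img_rgb):
    return img_rgb / 255.0  # what rescale=1./255 does

img = np.full((2, 2, 3), 255.0)        # dummy all-white image
print(vgg_style_preprocess(img)[0, 0])  # channel-centered values, not in [0, 1]
print(naive_rescale(img)[0, 0])         # [1. 1. 1.]
```

Feeding [0, 1]-rescaled inputs to a network pretrained on zero-centered BGR data is a common cause of VGG training/transfer problems.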
tsgiannis OP t1_j4wk889 wrote
Reply to comment by nibbajenkem in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
>Less data means more underspecification and thus the model more readily gets stuck in local minima
This is probably the answer to my "why".
tsgiannis OP t1_j4vw83q wrote
Reply to comment by Present-Ad-8531 in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
I know and this is what I use but ....
Just picture this in your mind..
You want to classify, for example, sports cars... wouldn't it be reasonable to feed a model images of sports cars and let it learn, compared to images of frogs, butterflies, etc. (ImageNet)?
tsgiannis OP t1_j4vvxt3 wrote
Reply to comment by jsxgd in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
By "from scratch" I mean I take the implementation of a model (just pick any) from articles and GitHub pages, copy-paste it, and feed it my data.
There is always a big accuracy difference no matter what... at first I thought it was my mistake, because I always tinker with what I copy, but...
tsgiannis OP t1_j4v447x wrote
Reply to comment by XecutionStyle in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
No changes to the pretrained model besides removing the top layer.
I am aware that unfreezing can lead to either good or bad results.
tsgiannis OP t1_j4v3ysi wrote
Reply to comment by loopuleasa in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
Now that's something to discuss..
>more resource
Now this is something well known...so skip it for now
>better tuning
This is the interesting info
What exactly do you mean by this... is it right to assume that all the papers explaining the architecture lack some fine details, or is it something else?
tsgiannis OP t1_j4uukd9 wrote
Reply to comment by XecutionStyle in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
What exactly do you mean ...by "fixing weights" ?
The pretrained model carries the weights from ImageNet and that's all... if I unfreeze some layers it will gain some more accuracy.
But the "from scratch" model starts empty.
tsgiannis OP t1_j4uu9q4 wrote
Reply to comment by Buddy77777 in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
Thanks for the reply and I agree with you but...
Right now I am watching my model train... it simply found a convergence point and it's stuck around 86%+ training accuracy and 85%+ validation accuracy... and I have observed this behavior more than once... so I am just curious.
Anyway, probably the best answer is that it doesn't extract enough features and gets stuck... because it's unable to make some crucial separations.
Submitted by tsgiannis t3_10f5lnc in deeplearning
tsgiannis t1_j4fcuo3 wrote
I meant on the betting site
tsgiannis t1_j4c3d7c wrote
You can always simulate
tsgiannis t1_j4c2r0d wrote
No... as I wrote, take a previous year's complete data. Let's take the 2021 season... you have gone back to 2021 and you have absolutely no knowledge of the outcomes of the games.
The season starts and you are all fired up to earn some money. You wait until a reasonable number of games have been played... around 60% I reckon is a good percentage. So you start training the model. You start with a base amount of cash, e.g. $100. You predict the coming 5-10 games... how did the model perform? Have you made a profit or not? Again for the next 5-10 games... You play until either you run out of money or the season ends.
If you run out of money: the bitter truth... back to the drawing board.
If the season ends, measure your money:
$100 - $120: well, at least you didn't lose... but it was tight
$121 - $150: maybe you have something
$151 - $200: maybe you should give it a go
> $201: let's make some money 🤑
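The procedure above can be sketched roughly like this (everything here is made up for illustration: the fixed $5 stake, the 1.9 odds, the dummy always-pick-home "model" and the fake season data are all assumptions, not a real strategy):

```python
import random

random.seed(1)

def simulate_season(predict, games, bankroll=100.0, stake=5.0, odds=1.9):
    """Walk-forward bankroll test: bet a fixed stake on each prediction,
    stop if the bankroll runs out, and return the final amount."""
    for game in games:
        if bankroll < stake:
            return 0.0                        # bust: back to the drawing board
        bankroll -= stake
        if predict(game) == game["winner"]:   # a correct pick pays stake * odds
            bankroll += stake * odds
    return bankroll

# Hypothetical season data and a dummy always-pick-home "model".
games = [{"id": i, "winner": random.choice(["home", "away"])} for i in range(100)]
final = simulate_season(lambda g: "home", games)

verdict = ("bust" if final == 0 else
           "you lost money" if final < 100 else
           "at least you didn't lose" if final <= 120 else
           "maybe you have something" if final <= 150 else
           "maybe you should give it a go" if final <= 200 else
           "let's make some money")
print(f"${final:.2f}: {verdict}")
```

In a real backtest the `predict` callable would be a model retrained on the games played so far, re-fitted every 5-10 games as described above.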
tsgiannis t1_j412nv9 wrote
Reply to [P] Looking for someone with good NN/ deep learning experience for a paid project by CuriousCesarr
The real question is: can you get accurate data? You need to scan pics for a gazillion different kinds of things and in the end provide a number... but for this to work you need a ton of data to provide the correct object matches.
tsgiannis t1_j3vx1lm wrote
If you get 75% accuracy no matter what, you can consider yourself a game breaker.
But do test the model, and when I say test I don't mean on a static subset you have tested again and again.
For example (I haven't read the code yet): pick last year and train your model on 60% of the games. Let's say there are 100 games and you have trained on 60... did you manage to accurately predict the 61st, 62nd... 70th (let's take them in batches of 10)? Now the next batch... are you still carrying an accuracy over 75%?
Like you, I had a model for baseball that was around 60% accurate... but when I put it to the test it failed hard.
For now such a high accuracy seems a good starting point, but do test.
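The batch-by-batch check above can be sketched like this (the held-out outcomes and predictions here are fabricated placeholders, not real data):

```python
# Walk-forward check: after training on the first 60% of games, score the
# model on the remaining games in batches of 10 and see whether the
# accuracy holds up, instead of reusing one static test subset.
def batch_accuracies(predictions, outcomes, batch_size=10):
    accs = []
    for start in range(0, len(outcomes), batch_size):
        batch_p = predictions[start:start + batch_size]
        batch_o = outcomes[start:start + batch_size]
        correct = sum(p == o for p, o in zip(batch_p, batch_o))
        accs.append(correct / len(batch_o))
    return accs

# Hypothetical held-out games (the last 40 of a 100-game season).
outcomes    = ["H", "A", "H", "H", "A"] * 8   # 40 true results
predictions = ["H", "A", "H", "A", "A"] * 8   # a model's 40 picks

accs = batch_accuracies(predictions, outcomes)
print(accs)                            # [0.8, 0.8, 0.8, 0.8]
print(all(a >= 0.75 for a in accs))    # True: the 75% holds up in every batch
```

If the per-batch accuracy decays as the season moves away from the training window, the headline accuracy was an artifact of the static test subset.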
tsgiannis t1_jb40tzg wrote
Reply to Should I choose Colab or RTX3070 for deep learning? by Cyp9715
A 3070 should be much, much faster than Colab, and you have the added bonus of working with full debugging capabilities (PyCharm/Spyder, etc.)
Even my 2nd-hand 3050 is much faster than Colab... but it is always helpful to have a 2nd machine... so 3070 AND Colab.