Dmytro_P t1_iyjw1xh wrote

400 hours is less than 3 weeks of training time; if you plan to keep the system loaded for at least half a year, building your own system may be quite a bit cheaper than renting.

I built a similar 3000-series system as well (with the power limit reduced to around 300 W per GPU, the performance impact is not that large); renting for the time it was actually used would have cost me significantly more.
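As a rough sketch of the break-even reasoning, here is a back-of-the-envelope comparison. All prices below are illustrative assumptions, not figures from this thread:

```python
# Rent-vs-buy break-even sketch with ASSUMED prices (not from the comment).
RENT_PER_GPU_HOUR = 2.00   # assumed cloud rate for a comparable GPU, USD
BUILD_COST = 3000.00       # assumed cost of one 3000-series GPU system, USD
HOURS_PER_RUN = 400        # training time mentioned in the thread

rent_cost = RENT_PER_GPU_HOUR * HOURS_PER_RUN
break_even_hours = BUILD_COST / RENT_PER_GPU_HOUR

print(f"renting one 400 h run: ${rent_cost:.0f}")
print(f"break-even vs. building: {break_even_hours:.0f} GPU-hours")
```

At these assumed rates a single 400-hour run is cheaper to rent, but keeping the system loaded for half a year (~4,380 hours) would put rented cost well above the build cost.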

2

Dmytro_P t1_isnljcv wrote

It depends on how large and diverse your dataset is, but in most cases you should; otherwise you'd likely see an even larger gap between the train and test sets.

You can also use multiple folds, e.g. training the model 5 times, each time with a different held-out test set, to check whether the test set you originally selected accidentally contains simpler samples.
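The multiple-folds idea above can be sketched as plain k-fold cross-validation. This is a minimal illustration; `train_eval_fn` is a hypothetical callback standing in for whatever training and scoring routine you use:

```python
import numpy as np

def kfold_scores(X, y, train_eval_fn, k=5, seed=0):
    """Split the data into k folds; each fold serves once as the test set.

    train_eval_fn(X_tr, y_tr, X_te, y_te) -> score is a hypothetical
    callback supplied by the caller (train on the first pair, score on
    the second). Returns the list of k scores, one per held-out fold.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle once, then partition
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_eval_fn(X[train_idx], y[train_idx],
                                    X[test_idx], y[test_idx]))
    return scores
```

If one fold scores much higher than the rest, your original test split may indeed have been an unusually easy subset.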

2