
Tiny_Arugula_5648 t1_isj2xgz wrote

Well, it does depend on what type of models you want to build and how much data you'll be using, but the general rule of thumb is to go with the most powerful GPU and the largest amount of RAM you can afford. Too little processing power means you'll wait around much longer for everything (training, predicting), and with too little RAM, many of the larger models out there like BERT might not run at all.
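For a rough sense of scale, you can estimate a model's memory footprint from its parameter count; a minimal sketch, assuming the `transformers` package is installed:

```python
from transformers import BertModel

# Load a pretrained BERT and count its parameters.
model = BertModel.from_pretrained("bert-base-uncased")
n_params = sum(p.numel() for p in model.parameters())

# fp32 weights alone take 4 bytes per parameter; training needs roughly
# 3-4x more (gradients + optimizer state), before counting activations.
print(f"{n_params / 1e6:.0f}M params ≈ {n_params * 4 / 1e9:.2f} GB in fp32")
```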

Or just get a couple of Colab accounts. I get plenty of V100 and even A100 time by switching between different accounts.
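If you want to see which GPU a given session landed on, here's a quick check (a minimal sketch, assuming PyTorch, which Colab ships by default):

```python
import torch

# Report which accelerator this Colab session was assigned, if any.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 1e9:.1f} GB VRAM")
else:
    print("No GPU assigned to this runtime")
```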

4

Varterove_muke t1_iskhc9r wrote

How do you get around Google noticing you switching between accounts? I tried it: on one account I trained and saved a model, then transferred the model to another account, but when I tried to continue training, it locked me out of the TPU runtime environment.
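The transfer itself was just a standard checkpoint save/load through Drive, roughly like this sketch (the path and the toy model are placeholders):

```python
import torch
from torch import nn, optim
from google.colab import drive

# Mount Drive so the checkpoint can be shared between accounts.
drive.mount("/content/drive")
ckpt_path = "/content/drive/MyDrive/checkpoint.pt"

model = nn.Linear(10, 2)                    # stand-in for the real model
optimizer = optim.Adam(model.parameters())

# Account A: save everything needed to resume training.
torch.save({
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "epoch": 5,
}, ckpt_path)

# Account B: rebuild an identical model/optimizer, then restore.
state = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(state["model"])
optimizer.load_state_dict(state["optimizer"])
start_epoch = state["epoch"] + 1
```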

1

Tiny_Arugula_5648 t1_isx67jr wrote

Doubtful you got “bricked” or that Google caught you switching accounts… more likely it's that TPUs are in high demand and expensive, Colab is a best-effort service that hands out unused capacity, and there just weren't any TPUs available.

1