Comments

Tiny_Arugula_5648 t1_isj2xgz wrote

Well it does depend on what type of models you want to build and how much data you'll be using… but the general rule of thumb is always to go with the most powerful GPU and the largest amount of RAM you can afford. With too little processing power you'll wait around much longer for everything (training, predicting); with too little RAM, many of the larger models out there like BERT might not run at all.

Or just get a couple of Colab accounts… I get plenty of V100 and even A100 time by switching between different accounts.

4
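A rough back-of-the-envelope sketch of the "get as much memory as you can afford" point above (the 110M parameter count for BERT-base, the 4-byte fp32 weights, and the two-extra-copies Adam assumption are mine, not the commenter's, and this ignores activation memory, which often dominates):

```python
def training_memory_gb(n_params, bytes_per_param=4, optimizer_states=2):
    """Rough lower bound on training memory: weights + gradients +
    optimizer state (Adam keeps ~2 extra copies per parameter).
    Activation memory is NOT included and grows with batch size."""
    copies = 1 + 1 + optimizer_states  # weights, grads, optimizer states
    return n_params * bytes_per_param * copies / 1024**3

# BERT-base has roughly 110M parameters (assumed figure for illustration)
print(round(training_memory_gb(110_000_000), 1))  # → 1.6 (GB, before activations)
```

With activations for realistic batch sizes on top of this, even "small" models fill up consumer cards quickly, which is why the VRAM-first advice in this thread keeps coming up.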

Varterove_muke t1_iskhc9r wrote

How do you keep Google from noticing that you're switching between accounts? I tried it: I trained and saved a model on one account and transferred it to another, but when I tried to continue training there, I was locked out of the TPU runtime environment.

1

Tiny_Arugula_5648 t1_isx67jr wrote

Doubtful you got "bricked" or that Google caught you switching accounts… more likely TPUs are in high demand and expensive, Colab allocates unused resources on a best-effort basis, and there just weren't any TPUs available at the time.

1

Downtown-Ease-8454 t1_islxn7g wrote

When purchasing a GPU for training, it's essential to ensure you have sufficient VRAM. With too little of it, you'll be limited to small model architectures and small batches of training data.

1

FoundationPM t1_iswlbgw wrote

I've used both a 2080 Ti and a 3090, and I'd say the 3090 has an advantage of a few tens of percent in performance. As for the difference between a 3060 and a 3090, I haven't tested it.

1