Submitted by Infamous_Age_7731 t3_107pcux in deeplearning
agentfuzzy999 t1_j3pbk38 wrote
I have trained locally and in the cloud on a variety of cards and server architectures. Depending on the model you are training, the slowdown could have a huge variety of causes, but if the model fits on a 3080 you really aren't taking advantage of the A100's huge memory; the 3080's higher clock speed may simply suit this model and parameter set better.
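One way to check which card actually suits your model is to time the training step directly on each GPU rather than guessing from specs. A minimal sketch in plain Python; `step_fn` is a hypothetical stand-in for your real forward/backward pass:

```python
import time

def benchmark(step_fn, warmup=3, iters=10):
    """Return the mean wall-clock seconds per call of step_fn.

    step_fn: a zero-argument callable running one training step.
    Warmup iterations are discarded so one-time costs (allocator,
    kernel compilation) don't skew the average.
    """
    for _ in range(warmup):
        step_fn()
    # Note: on a GPU, synchronize before reading the clock
    # (e.g. torch.cuda.synchronize()) since kernel launches are async.
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    return (time.perf_counter() - start) / iters
```

Run the same `step_fn` on the 3080 box and the A100 instance and compare the two numbers; whichever is lower wins for that model and batch size.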
Infamous_Age_7731 OP t1_j3qrctv wrote
Thanks for your advice. FYI, I use the A100 for larger models and/or longer inputs/outputs that don't fit on my 3080.