DevarshTare OP t1_j9b4u7s wrote

That's interesting. I was considering that purchase since it makes sense for running larger datasets or models on the RTX 3060, but the Tensor core count is significantly lower. The GPU would definitely run much larger models, but at a lower speed, I assume?

How has your experience been with larger models, especially video- or image-based ones?

2

DevarshTare t1_j95ct2p wrote

What matters while running models?

Hey guys, I'm new to machine learning and just learning the basics. I'm planning to buy a GPU soon for running pre-built models from Google Colab.

My question is: after you build a model, what matters for the model's runtime? Is it the memory, the bandwidth, or the CUDA cores you utilize?

Basically, what makes an already-trained model run faster in an application? I imagine it varies from application to application, but I just wanted to learn what matters most when running pre-trained models.
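One rough way to think about this question is a back-of-the-envelope "roofline" estimate: for each layer, compare the time the GPU would spend doing the math against the time it would spend moving data, and whichever is larger is your bottleneck. The sketch below assumes illustrative specs roughly in the ballpark of an RTX 3060 (~13 TFLOPS FP32, ~360 GB/s memory bandwidth) — check the actual spec sheet, these numbers are assumptions, not measurements.

```python
# Hedged sketch: a back-of-the-envelope roofline estimate for one layer.
# The GPU specs below are illustrative assumptions, not exact figures.

def layer_time_estimate(flops, bytes_moved, peak_flops, bandwidth):
    """Return (compute_time_s, memory_time_s, bottleneck) for one layer."""
    t_compute = flops / peak_flops       # time if limited by cores
    t_memory = bytes_moved / bandwidth   # time if limited by memory traffic
    bottleneck = "compute" if t_compute > t_memory else "memory"
    return t_compute, t_memory, bottleneck

# Assumed specs (illustrative): ~13 TFLOPS FP32, ~360 GB/s bandwidth.
PEAK_FLOPS = 13e12
BANDWIDTH = 360e9

# A 4096x4096 FP32 matmul: ~2*N^3 FLOPs, ~3*N^2*4 bytes moved.
n = 4096
tc, tm, b = layer_time_estimate(2 * n**3, 3 * n * n * 4, PEAK_FLOPS, BANDWIDTH)
print(f"matmul: {b}-bound")   # prints: matmul: compute-bound

# An elementwise op on the same tensor: ~2*N^2 FLOPs, same bytes moved.
tc2, tm2, b2 = layer_time_estimate(2 * n * n, 3 * n * n * 4, PEAK_FLOPS, BANDWIDTH)
print(f"elementwise: {b2}-bound")  # prints: elementwise: memory-bound
```

The takeaway: big matrix multiplies tend to be limited by compute (cores), while elementwise ops and small layers tend to be limited by memory bandwidth, and VRAM capacity separately decides whether the model fits at all. Real inference also depends heavily on software (batching, precision, kernel fusion), so treat this as intuition, not a benchmark.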

1