TruthAndDiscipline t1_j9avex4 wrote

I'm using a 3060 (non-Ti) with 12 GB of VRAM and train locally as well. Performance is fine, too.
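
In case it helps, here's a minimal sketch (assuming a CUDA build of PyTorch) to confirm which card and how much VRAM your local setup actually sees:

```python
import torch

# Report the GPU and total VRAM that PyTorch can see locally.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:  {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")  # ~12 GiB on a 3060
else:
    print("No CUDA device found")
```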

DevarshTare OP t1_j9b4u7s wrote

That's interesting. I was considering that purchase, since the RTX 3060's 12 GB makes it practical to run larger datasets or models. But its Tensor core count is significantly lower. The GPU would definitely run much larger models, but at a lower speed, I assume?

How has your experience been with larger models, especially video- or image-based models?

ggf31416 t1_j9clwen wrote

I actually have a 3060 too. In theory a 3060 Ti should be up to 30% faster, but most of the time the 3060 is fast enough, and faster than any T4.

For generating a few images with Stable Diffusion the difference might be 15 vs. 20 seconds; for running Whisper on several hours of audio it could be 45 minutes vs. an hour. The difference will only matter if the model is optimized to fully use the GPU in the first place.
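
If you want a rough sense of whether your model actually saturates the card before worrying about the 3060 vs. 3060 Ti gap, you can time it like this; the `Linear` layer is just a stand-in for whatever you're really running:

```python
import torch

# Stand-in model and batch; swap in whatever you're actually benchmarking.
model = torch.nn.Linear(4096, 4096).cuda().eval()
batch = torch.randn(64, 4096, device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

with torch.no_grad():
    for _ in range(10):       # warm-up passes so timings aren't skewed by startup cost
        model(batch)
    start.record()
    for _ in range(100):
        model(batch)
    end.record()

torch.cuda.synchronize()      # wait for the GPU to finish before reading the timer
print(f"{start.elapsed_time(end) / 100:.3f} ms per forward pass")
```

Run the same script on both cards and the ratio tells you how much of that theoretical 30% you'd actually get for your workload.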

DevarshTare OP t1_j9ngofc wrote

I've seen the same across multiple threads now: VRAM makes the difference between being able to run a model outright and having to optimize it to fit. This has been really helpful, thanks a lot guys!
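
For anyone landing here later: when a model doesn't quite fit, the usual first optimizations are a smaller batch size and mixed precision. A minimal sketch of both knobs in PyTorch, with a placeholder model and data:

```python
import torch

# Placeholder model and data; the two memory-saving knobs are what matter here.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
).cuda()
optimizer = torch.optim.AdamW(model.parameters())
scaler = torch.cuda.amp.GradScaler()           # scales the loss to keep fp16 gradients stable

x = torch.randn(8, 4096, device="cuda")        # knob 1: a smaller batch size
y = torch.randint(0, 10, (8,), device="cuda")

optimizer.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast():                # knob 2: run the forward pass in fp16
    loss = torch.nn.functional.cross_entropy(model(x), y)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

If activations are still the bottleneck after that, gradient checkpointing (`torch.utils.checkpoint`) is the next lever, trading compute for memory.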
