Comments

suflaj t1_j73xnvw wrote

A CUDA-capable GPU with compute capability generally 3.0 or above, and an OS that supports that NVIDIA GPU and the CUDA drivers.
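
For example, with PyTorch you can check whether a usable GPU is visible and what compute capability it reports (a minimal sketch; assumes PyTorch was installed with CUDA support):

```python
import torch

# Check whether PyTorch can see a CUDA-capable GPU at all
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"Compute capability: {major}.{minor}")
else:
    print("No usable CUDA GPU found (check the driver and CUDA install)")
```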

9

Some-Assistance-7812 t1_j764ulk wrote

You need raw power!! In terms of CPU, the number of cores matters more than clock speed. In terms of RAM, more capacity is preferable (generally more than twice the VRAM of your GPU). In terms of GPU, VRAM is the most essential! A quick sanity check is sketched below.
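
Here's a sketch that checks a machine against that rule of thumb (assumes PyTorch plus the third-party psutil package; the 2x-VRAM threshold is just the guideline from this comment, not a hard requirement):

```python
import os
import torch
import psutil  # third-party: pip install psutil

cores = os.cpu_count()
ram_gib = psutil.virtual_memory().total / 2**30
print(f"CPU cores: {cores}, system RAM: {ram_gib:.1f} GiB")

if torch.cuda.is_available():
    vram_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
    print(f"GPU VRAM: {vram_gib:.1f} GiB")
    # Rule of thumb from this thread: system RAM >= 2x GPU VRAM
    if ram_gib < 2 * vram_gib:
        print("Warning: system RAM is less than twice the GPU's VRAM")
```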

2

Appropriate_Ant_4629 t1_j75sc61 wrote

Note that some models are extremely RAM-intensive, while others aren't.

A common issue you may run into is errors like `RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch)`, and it can be pretty tricky to refactor models to work with less RAM than they expect (see examples in that link).
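
One common workaround when refactoring isn't practical is gradient accumulation: run several small batches per optimizer step, so peak VRAM is set by the small batch rather than the large effective one. A minimal sketch (the linear model and synthetic data are toy stand-ins, not from the post):

```python
import torch
from torch import nn

# Toy stand-ins so the sketch is self-contained; swap in your own model/data.
model = nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
data = [(torch.randn(8, 512), torch.randint(0, 10, (8,))) for _ in range(16)]

accum_steps = 4  # effective batch size = 8 * 4 = 32, at the VRAM cost of 8
optimizer.zero_grad()
for i, (x, y) in enumerate(data):
    loss = loss_fn(model(x.cuda()), y.cuda()) / accum_steps  # scale so grads average
    loss.backward()                       # gradients accumulate in .grad
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()

# The fields in that error message map to these counters:
print(torch.cuda.memory_allocated() / 2**20, "MiB allocated by tensors")
print(torch.cuda.memory_reserved() / 2**20, "MiB reserved by PyTorch")
```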

1

ilolus t1_j763eyt wrote

That's VRAM, not RAM. It's rare to have a problem with RAM limits nowadays.

6

foxracing4500 t1_j77rdn3 wrote

Depends on your budget. How much are you looking to spend?

1

harry-hippie-de t1_j78b01o wrote

There's a difference between training and inference. Hardware requirements for training are larger, since training has to keep activations for backprop plus gradients and optimizer state, while inference only needs the forward pass.
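
A minimal sketch of the cheaper inference side in PyTorch (the linear model is a hypothetical stand-in for a trained model):

```python
import torch
from torch import nn

model = nn.Linear(512, 10)  # toy stand-in for a trained model
model.eval()                # inference mode: disables dropout/batch-norm updates

x = torch.randn(32, 512)
with torch.no_grad():       # skip building the autograd graph, so activations
    preds = model(x)        # aren't retained for a backward pass
print(preds.shape)
```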

1