
Appropriate_Ant_4629 t1_j75sc61 wrote

Note that some models are extremely RAM intensive, while others aren't.

A common issue you may run into is an error like RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch), and it can be pretty tricky to refactor models to work with less RAM than they expect (see examples in that link).
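As a back-of-the-envelope sketch of why these errors happen, you can estimate the memory the weights alone will take from the parameter count and precision (the function name and the 2-billion-parameter figure below are hypothetical; real usage is higher because activations, gradients, and optimizer state also consume memory):

```python
# Rough estimate of memory needed just to hold a model's weights.
# Actual VRAM usage is larger: activations, gradients, and optimizer
# state (for training) all add on top of this.
def weight_memory_gib(num_params: int, bytes_per_param: int = 4) -> float:
    """Return the memory needed for the weights, in GiB."""
    return num_params * bytes_per_param / (1024 ** 3)

# Hypothetical 2-billion-parameter model:
fp32 = weight_memory_gib(2_000_000_000, 4)  # ~7.45 GiB: won't fit on an 8 GiB card with overhead
fp16 = weight_memory_gib(2_000_000_000, 2)  # ~3.73 GiB: may fit in half precision
print(f"fp32: {fp32:.2f} GiB, fp16: {fp16:.2f} GiB")
```

This is why loading a model in half precision (or quantized) is often the first thing to try when you hit an out-of-memory error.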

1

ilolus t1_j763eyt wrote

That's VRAM, not RAM. It's rare to have a problem with RAM limits nowadays.

6