Submitted by xyrlor t3_ymoqah in deeplearning
Hamster729 t1_iv7swx8 wrote
Absolutely. In fact, you typically get more DL performance per dollar with AMD GPUs than with NVIDIA.
However, there are caveats:
- The primary target scenario for ROCm is Linux + docker container + gfx9 server SKUs (Radeon Instinct MIxxx). The further you move from that optimal target, the more uncertain things become. You can install the whole stack directly into your Ubuntu system, or, if you really want to waste a lot of time, compile everything from source, but it is best to install just the kernel-mode driver and then use "docker run --privileged" to pull a complete container image with every package already in place. I am not sure what the situation is with Windows support. Support for consumer-grade GPUs usually comes with some delay. E.g. Navi 21 support was only "officially" added last winter, and the new chips announced last week may not be officially supported for months after they hit the shelves.
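The driver-plus-container route above looks roughly like this. This is a sketch, not exact commands: the image tag and the `amdgpu-install` use case depend on your ROCm release and distro, so check AMD's install docs for your version.

```shell
# On the host (Ubuntu): install only the kernel-mode driver via AMD's
# amdgpu-install tool (shipped as a .deb from AMD's repo), skipping the
# full userspace ROCm stack.
sudo amdgpu-install --usecase=dkms

# Then pull a prebuilt ROCm PyTorch container with every package in place.
# --device flags expose the GPU nodes; --privileged is the blunt alternative.
docker run -it \
    --device=/dev/kfd --device=/dev/dri \
    --security-opt seccomp=unconfined \
    rocm/pytorch:latest
```

Everything CUDA-equivalent (HIP, MIOpen, a matching PyTorch build) lives inside the image, so nothing but the driver touches the host system.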
- You occasionally run into third-party packages that expect CUDA and only CUDA. I just had to go through the process of hacking pytorch3d (the visualization package from FB) because it had issues with ROCm.
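Before patching a CUDA-only package, it helps to know which backend your PyTorch build actually targets: ROCm builds populate `torch.version.hip`, CUDA builds populate `torch.version.cuda`. A minimal helper (the function name is mine, not a PyTorch API):

```python
def gpu_backend(torch_module):
    """Return 'hip', 'cuda', or None for a given torch module.

    ROCm builds of PyTorch set torch.version.hip; CUDA builds set
    torch.version.cuda. A CPU-only build leaves both unset.
    """
    version = getattr(torch_module, "version", None)
    if getattr(version, "hip", None):
        return "hip"
    if getattr(version, "cuda", None):
        return "cuda"
    return None


# Typical use: import torch and branch on gpu_backend(torch) before
# running a package's CUDA-specific build or setup step.
```

Note that on ROCm builds `torch.cuda.is_available()` still returns True (HIP is exposed through the `torch.cuda` namespace), which is exactly why packages that shell out to `nvcc` or hard-code CUDA paths break and need hand-patching.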