Submitted by _underlines_ t3_zstequ in MachineLearning
SirReal14 t1_j1axbn1 wrote
Another option is to work with or contribute to a distributed implementation of large language models. The Petals project runs BLOOM over a decentralized network of small workers (minimum 8GB VRAM per worker).
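For intuition, the scheme Petals relies on is pipeline parallelism: the model's layers are partitioned into contiguous shards, each worker holds one shard, and a request's activations hop from worker to worker until the full model has been applied. Here's a toy, pure-Python sketch of that idea (the `Worker` class, `partition`, and the lambda "layers" are all hypothetical illustrations, not Petals' actual networking code):

```python
# Toy sketch of pipeline-parallel inference, the idea behind Petals:
# each worker holds a contiguous slice of the model's layers and
# passes activations on to the next worker in the chain.

class Worker:
    """Holds one shard of the model: a list of layer functions."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, activations):
        # Apply this worker's layers in order, then hand the result off.
        for layer in self.layers:
            activations = layer(activations)
        return activations

def partition(layers, num_workers):
    """Split layers into num_workers contiguous shards of near-equal size."""
    size, rem = divmod(len(layers), num_workers)
    shards, start = [], 0
    for i in range(num_workers):
        end = start + size + (1 if i < rem else 0)
        shards.append(layers[start:end])
        start = end
    return shards

def distributed_forward(workers, x):
    """Run input x through the whole pipeline, one worker at a time."""
    for worker in workers:
        x = worker.forward(x)
    return x

# Stand-in "model": six layers (three doublings, then three increments),
# sharded across two workers.
layers = [lambda x: x * 2 for _ in range(3)] + [lambda x: x + 1 for _ in range(3)]
workers = [Worker(shard) for shard in partition(layers, 2)]
print(distributed_forward(workers, 1))  # 1 -> 2 -> 4 -> 8, then 9 -> 10 -> 11
```

In the real system each shard is a block of transformer layers on a separate machine (hence the per-worker VRAM floor), and the client only needs enough memory for the embeddings and activations.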
Soc13In t1_j1bdyxw wrote
Can Radeon cards work, or is it Nvidia only?
kkchangisin t1_j1bjz3f wrote
CUDA only
Soc13In t1_j1byszb wrote
Expected as much. Thanks for the info, though.
SirReal14 t1_j1ben2b wrote
Not sure, give it a try and find out!