Submitted by mosalreddit t3_118iyke in deeplearning
suflaj t1_j9jwfc9 wrote
Reply to comment by shawon-ashraf-93 in Bummer: nVidia stopping support for multi-gpu peer to peer access with 4090s by mosalreddit
OK, and now an actual argument?
Or are you one of those people who unironically believe NVLink enabled memory pooling or things like that?
shawon-ashraf-93 t1_j9jwos2 wrote
NVLink doesn’t have to be gaming specific. Anything that requires high-bandwidth, low-latency data transfer will benefit from it. There’s no point in having 24 gigs of VRAM if you can’t transfer data between GPUs faster in a multi-GPU setting.
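(For context on what "peer to peer access" means here: whether one GPU can read another GPU's memory directly can be queried from PyTorch. A minimal sketch, assuming a machine with at least two CUDA-capable GPUs and a CUDA-enabled PyTorch build:)

```python
# Minimal sketch: check whether GPU 0 can directly access GPU 1's memory
# (peer-to-peer), which is the path NVLink / P2P over PCIe accelerates.
# Assumes at least two visible CUDA devices.
import torch

if torch.cuda.device_count() >= 2:
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print(f"GPU 0 -> GPU 1 peer access available: {p2p}")
else:
    print("Fewer than two CUDA devices visible; nothing to check.")
```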
suflaj t1_j9k3bqn wrote
The 300 GB/s, which was its theoretical upper limit in a MULTI-GPU workload, did not show a significant difference in benchmarks. Please do not gaslight people into believing it did.
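(A rough way to see what a link actually delivers, rather than its theoretical peak, is to time a device-to-device copy. A minimal sketch, assuming two CUDA GPUs; the 1 GiB buffer and 10 repetitions are arbitrary illustration choices, not a rigorous benchmark:)

```python
# Rough device-to-device bandwidth probe between cuda:0 and cuda:1.
# This is a sketch, not a proper benchmark; actual throughput depends on
# whether P2P is enabled, the interconnect, and driver behavior.
import time
import torch

size_bytes = 1 << 30  # 1 GiB of float32 data
src = torch.empty(size_bytes // 4, dtype=torch.float32, device="cuda:0")
dst = torch.empty(size_bytes // 4, dtype=torch.float32, device="cuda:1")

dst.copy_(src)  # warm-up copy so lazy initialization doesn't skew timing
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")

t0 = time.perf_counter()
for _ in range(10):
    dst.copy_(src)
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")
t1 = time.perf_counter()

print(f"~{10 * size_bytes / (t1 - t0) / 1e9:.1f} GB/s device-to-device")
```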
shawon-ashraf-93 t1_j9k3j42 wrote
Post the benchmarks :) I’m not the one gaslighting here.
suflaj t1_j9khr8s wrote
The burden of proof is on you, since you initially claimed that there would be benefits.
shawon-ashraf-93 t1_j9khu9r wrote
Okay. Have a nice day good sir. :)