Submitted by mosalreddit t3_118iyke in deeplearning
Comments
CKtalon t1_j9kfuhx wrote
Funny how the RTX 6000 Ada doesn’t have NVLink either
suflaj t1_j9iwkp8 wrote
They removed NVLink to cut costs on a feature that only a handful of games in existence support and that is otherwise useless.
shawon-ashraf-93 t1_j9jw9vv wrote
Don’t embarrass yourself over things you don’t know.
suflaj t1_j9jwfc9 wrote
OK, and now an actual argument?
Or are you one of those people who unironically believe NVLink enabled memory pooling or things like that?
shawon-ashraf-93 t1_j9jwos2 wrote
NVLink doesn’t have to be gaming-specific. Anything that requires high-bandwidth, low-latency data transfer will benefit from it. There’s no point in 24 gigs of VRAM if you can’t transfer data between GPUs quickly in a multi-GPU setting.
suflaj t1_j9k3bqn wrote
The 300 GB/s, which was its theoretical upper limit, did not show a significant difference in MULTI-GPU benchmarks. Please do not gaslight people into believing it did.
shawon-ashraf-93 t1_j9k3j42 wrote
Post the benchmarks :) I’m not the one gaslighting here.
suflaj t1_j9khr8s wrote
The burden of proof is on you, since you initially claimed there would be benefits.
shawon-ashraf-93 t1_j9khu9r wrote
Okay. Have a nice day good sir. :)
buzzz_buzzz_buzzz t1_j9hjtqx wrote
Not surprising given that they removed NVLink to force multi-GPU users to purchase A6000s
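The bandwidth figures argued over in the thread can be put in rough perspective with a quick back-of-the-envelope sketch. The 300 GB/s NVLink figure comes from the comments above; the ~32 GB/s PCIe 4.0 x16 figure used as the fallback path is an assumption, and both are theoretical peaks that ignore latency and protocol overhead:

```python
# Back-of-the-envelope transfer times for moving a full 24 GB of VRAM
# between two GPUs. 300 GB/s is the NVLink figure cited in the thread;
# ~32 GB/s for PCIe 4.0 x16 is an assumed theoretical peak. Real-world
# throughput on either link will be lower.

def transfer_time_s(payload_gb: float, bandwidth_gb_s: float) -> float:
    """Ideal transfer time in seconds, ignoring latency and overhead."""
    return payload_gb / bandwidth_gb_s

payload = 24.0  # GB, one card's full VRAM

for name, bw in [("NVLink (theoretical)", 300.0),
                 ("PCIe 4.0 x16 (theoretical)", 32.0)]:
    print(f"{name}: {transfer_time_s(payload, bw):.2f} s for {payload:.0f} GB")
```

This only bounds the best case; whether the gap matters in practice depends on how often a workload actually saturates the inter-GPU link, which is the crux of the disagreement above.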