Comments

thelibrarian101 t1_j9ivql0 wrote

PCIe extenders always worry me; they tend to catch fire very easily. There is a large amount of current flowing through them, after all.

1

suflaj t1_j9iwxuj wrote

You would have to limit the power to 250 W. It will overheat without an open case. PCIe 3.0 x8 means you are cutting each card's bandwidth in half.

Overall a terrible idea.
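
If you do it anyway, capping the power is easy to script, e.g. with pynvml (rough sketch; needs root, and `sudo nvidia-smi -pl 250` per card does the same thing):

```python
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    # NVML takes milliwatts: 250 W -> 250000 mW
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, 250_000)
pynvml.nvmlShutdown()
```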

1

xolotl96 t1_j9izg4o wrote

The price of 4 RTX 4090s is very high. If you plan on spending that much money, I suppose it is for business or research. It could make sense to invest in a server-grade processor and motherboard with support for many more PCIe lanes. Also, servers often use multiple power supplies for redundancy, but in this case they can be helpful for managing the wattage of 4 cards. In my experience, limiting the max power does not impact training time in a dramatic way, so I would do that, especially if you are planning to air-cool them (which is probably the best thing for an offsite server).
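
You can measure the power-limit tradeoff on your own workload with something like this (untested sketch; assumes pynvml, root, and a single GPU, and the model/sizes are just placeholders):

```python
import time
import torch
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Placeholder workload, just to keep the GPU busy
model = torch.nn.Linear(4096, 4096).cuda()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(256, 4096, device="cuda")

for watts in (450, 350, 250):  # caps to compare, in watts
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, watts * 1000)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(200):
        opt.zero_grad()
        model(x).sum().backward()
        opt.step()
    torch.cuda.synchronize()
    print(f"{watts} W: {200 / (time.time() - t0):.1f} steps/s")
```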

4

jcoffi t1_j9jcsha wrote

P2P is disabled on the 4090s, so if you need that, expect degraded performance.
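
Easy to verify on your own machine, e.g. in PyTorch:

```python
import torch

# Check pairwise peer-to-peer access between all visible GPUs
n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: P2P {'yes' if ok else 'no'}")
```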

−1

jakecoolguy t1_j9jd1hg wrote

I would seriously consider also getting a lot of RAM and a beast of a CPU with a lot of cores if you're going with multiple GPUs. A lot of machine learning applications (especially less optimised ones) require stuff to be done on the CPU in parallel with the GPU, and you don't want to get bottlenecked.
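
Concretely, most of that CPU parallelism shows up in the data loading path. A sketch of the usual PyTorch knobs (the dataset here is a dummy stand-in):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset just to make the sketch self-contained
dataset = TensorDataset(torch.randn(1_000, 3, 64, 64),
                        torch.randint(0, 10, (1_000,)))

loader = DataLoader(
    dataset,
    batch_size=256,
    num_workers=16,           # scale with your CPU core count
    pin_memory=True,          # faster host-to-GPU transfers
    prefetch_factor=4,        # batches each worker keeps queued
    persistent_workers=True,  # avoid respawning workers every epoch
)
```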

3

SpareAnywhere8364 t1_j9jvayc wrote

Unless you have a giant case and very powerful cooling, those cards are gonna cook each other and throttle like a motherfucker.
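
Worth watching temps and clocks under load to catch it, e.g. with pynvml (quick sketch):

```python
import time
import pynvml

# Poll temperature and SM clocks to spot thermal throttling
pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

while True:
    for i, h in enumerate(handles):
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        clock = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_SM)
        print(f"GPU {i}: {temp} C, SM {clock} MHz")
    time.sleep(5)
```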

2

lovehopemisery t1_j9k4ql9 wrote

Have you considered the cost of training/running your models on the cloud compared to this? It seems like a big investment to rush into.
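
Even a crude break-even estimate helps frame it. Every number below is a made-up placeholder, so plug in your actual quotes and electricity rate:

```python
# Rough break-even sketch: all figures are assumed placeholders
build_cost = 4 * 1600 + 4000   # assumed: 4 cards plus the rest of the box, USD
cloud_rate = 4 * 2.00          # assumed $/hr to rent 4 comparable cloud GPUs
kwh_price = 0.25               # assumed electricity price, $/kWh
draw_kw = 4 * 0.30 + 0.3       # assumed draw: 4 power-capped cards + platform

power_per_hr = draw_kw * kwh_price
hours = build_cost / (cloud_rate - power_per_hr)
print(f"Owning breaks even after roughly {hours:,.0f} hours of full use")
```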

1