The price of four RTX 4090s is very high. If you plan on spending that much money, I assume it is for business or research. It could make sense to invest in a server-grade processor and motherboard that support many more PCIe lanes. Also, servers often use multiple power supplies for redundancy, but here they can also help manage the wattage of four cards. In my experience, limiting the max power does not dramatically impact training time, so I would do that, especially if you plan to air-cool the cards (which is probably the best option for an offsite server).
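To make the power-limiting idea concrete, here is a minimal Python sketch that drives `nvidia-smi -pl` via `subprocess`. The function names, the 4-GPU count, and the 300 W cap are all illustrative assumptions, not recommendations; actually applying the cap needs root and an NVIDIA driver present.

```python
# Hedged sketch: cap each GPU's power draw with nvidia-smi.
# The 300 W figure and 4-GPU count are illustrative, not tuned values.
import subprocess


def power_limit_cmd(gpu_index: int, watts: int) -> list[str]:
    """Build the nvidia-smi command that caps one GPU's power limit."""
    return ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)]


def cap_all_gpus(num_gpus: int = 4, watts: int = 300) -> None:
    """Apply the cap to every card (requires root and an NVIDIA driver)."""
    for i in range(num_gpus):
        subprocess.run(power_limit_cmd(i, watts), check=True)
```

Note the limit set this way resets on reboot, so an offsite server would typically reapply it from a startup script or systemd unit.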
I would also seriously consider getting a lot of RAM and a beast of a CPU with many cores if you're going multi-GPU. Many machine learning workloads (especially less optimised ones) need work done on the CPU in parallel with the GPU, and you don't want to be bottlenecked there.
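A rough sketch of why CPU cores matter: worker threads preprocess batches ahead of the consumer so the accelerator never starves. Every name below is illustrative; real frameworks (e.g. PyTorch's `DataLoader` with `num_workers > 0`) do this with worker processes rather than threads.

```python
# Sketch of the CPU-side input pipeline: parallel workers prepare
# batches while the GPU trains on the previous one. If the CPU can't
# keep up, the GPU sits idle -- that's the bottleneck in question.
from concurrent.futures import ThreadPoolExecutor


def preprocess(batch_id: int) -> list[int]:
    """Stand-in for decode/augment work that runs on CPU cores."""
    return [batch_id * 10 + i for i in range(4)]


def run_epoch(num_batches: int = 8, workers: int = 4) -> list[list[int]]:
    """Prefetch batches with a pool of parallel workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(preprocess, range(num_batches)))
```

With four GPUs, each wants its own set of loader workers, which is why a high core count (and enough RAM to hold the prefetched batches) pays off.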