Comments

Weary-Marionberry-15 t1_j3li7ao wrote

I don’t think this looks bad at all. I would probably push for A100 80GB GPUs instead, and the latest-gen 64-core Threadripper.

10

joossss OP t1_j3lj1su wrote

Thanks! The newest Threadrippers are still based on Zen 3, so they don't support AVX-512. Would definitely like to go with A100s, but we don't have the budget for that.
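If you want to double-check AVX-512 support on a given box, a quick sketch (Linux only) is to read the kernel's CPU feature flags:

```python
# quick AVX-512 check on Linux: the kernel lists CPU feature flags
# in /proc/cpuinfo, and Zen 3 parts won't report any avx512* flags
with open("/proc/cpuinfo") as f:
    flags = f.read()
print("AVX-512F supported:", "avx512f" in flags)
```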

6

TrueBirch t1_j3mc7ua wrote

What made you decide to run an on-prem server instead of going to the cloud? I'm a data science manager and I'm currently looking at our options. I like self-hosting for most things, but I'm up in the air about training deep learning models.

10

deephugs t1_j3mt7re wrote

Cloud is almost always better imo. At small scale you can prototype quicker and spend less time messing with hardware by using cloud services. Once you actually need to scale your product, a cloud solution makes that really easy. The "but it's cheaper" argument gets less valid every year, and it often doesn't account for the time and effort spent setting up a local cluster.

5

rlvsdlvsml t1_j3n2it2 wrote

If you use Ray, you can set up a GPU cluster in less than 30 minutes.
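For example, a rough sketch (assuming `ray start --head` was run on the head node and `ray start --address=<head-ip>:6379` on each worker, with your own addresses):

```python
import ray

# connect to the already-running cluster started with `ray start`
ray.init(address="auto")

@ray.remote(num_gpus=1)  # Ray places this task on a node with a free GPU
def train_task(task_id):
    import torch
    # real training code would go here; just report the assigned GPU
    return task_id, torch.cuda.get_device_name(0)

# fan a few tasks out across the cluster's GPUs
print(ray.get([train_task.remote(i) for i in range(4)]))
```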

2

deephugs t1_j3n3qwj wrote

I think Ray is great! But Ray will not click your GPUs into a motherboard, install Linux on all the machines, set up nvidia-docker, power cycle when there are issues, periodically clear space on the HDDs, etc. It's the non-software part of cluster management that ends up being the most annoying and time-consuming.

4

rlvsdlvsml t1_j3nd87h wrote

I have always felt like the network/security work and integration with internal IT systems was worse than the physical maintenance. People should expect to invest time in integrating with an on-prem data center environment and in physical upkeep. I think small teams are better served by a small GPU cluster with a fixed budget than by large cloud GPU training bills. Mid-to-large companies do better with cloud than on-prem because they get better separation of environments, but it costs more.

3

joossss OP t1_j3q9m8r wrote

The main reason for not going to the cloud is that we are a research institution, so our funding is project-based, meaning we have to spend it within the allotted time. The second reason is that we already have the GPUs, so the server pays for itself faster.

2

Cosmic_peach94 t1_j3nrjte wrote

As a recommendation I learned at a past job: use Slurm or a similar scheduler to take turns on the GPUs, so you don't end up crashing each other's models.
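For example, a minimal sketch of submitting each run through Slurm so it gets a dedicated GPU (the script name and resource numbers are hypothetical, adjust to your setup):

```python
import subprocess

# ask Slurm for one dedicated GPU per run, so concurrent jobs queue up
# instead of fighting over the same card; Slurm sets CUDA_VISIBLE_DEVICES
# to the allocated GPU for each job
subprocess.run([
    "sbatch",
    "--gres=gpu:1",        # one GPU per job
    "--cpus-per-task=16",  # adjust to your workload
    "--mem=64G",
    "train.sh",            # your training launch script (hypothetical)
], check=True)
```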

3

joossss OP t1_j3q9qe0 wrote

Thanks for the info! I was thinking about how to do that.

2

learn-deeply t1_j3nx99l wrote

Are you looking to do distributed training across machines? Otherwise the NIC seems like complete overkill.

2

joossss OP t1_j3q9uip wrote

Only this server is planned. I just went with the recommendation from NVIDIA's website, which stated 100 Gbps per A100, but that makes more sense now that I think about distributed training. What NIC speed seems enough in that case?
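For reference, in multi-node training the per-step gradient all-reduce between machines is what travels over the NIC, which is where NVIDIA's per-GPU bandwidth figure comes from. A minimal sketch with PyTorch DDP over NCCL (addresses and sizes are hypothetical):

```python
import torch
import torch.distributed as dist

# one process per GPU; rank, world_size, and address are hypothetical
dist.init_process_group(
    backend="nccl",                   # NCCL routes inter-node all-reduce over the NIC
    init_method="tcp://node0:29500",  # head-node address (hypothetical)
    world_size=2,
    rank=0,
)
model = torch.nn.Linear(1024, 1024).cuda()
# DDP all-reduces gradients across machines after every backward pass
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[0])
```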

1

learn-deeply t1_j3qdgsl wrote

10 Gbps is more than sufficient; data loading from the internet is not the bottleneck. Most likely you'll have the data already stored on the machine itself. Btw, why did you remove the post?

2

joossss OP t1_j3qnc04 wrote

Yeah true and thanks :)

I did not remove it. It was removed by the moderators for some reason.

1