Submitted by zveroboy152 t3_zwtgqw in MachineLearning

Greetings!


Recently I was asked about a budget AI/ML workload, and decided to test it against some of my own lab GPUs.


I'll be adding more tests and benchmarks over time, but below is a link to my website where I covered it, along with the code I wrote to benchmark them.


Hopefully this helps someone out there. :-)


https://www.zb-c.tech/2022/12/26/pytorch-drag-race-tesla-k80-performance/

11

Comments


Tom_Neverwinter t1_j1x5gyp wrote

Bought a few Tesla M40s; now I need a motherboard with enough GPU slots. 1x slots have too much of a bottleneck, whoops.

2

learn-deeply t1_j1xjm5p wrote

This benchmark is not representative of real models, making the comparison invalid. The model has ~5,000 parameters, while the smallest resnet (18) has 10 million parameters. You're essentially just comparing the overhead of PyTorch and CUDA, which isn't saying anything about the actual performance of the different GPUs.

19

yaosio t1_j20mi1e wrote

Your conclusion is that the Tesla K80 is a great value, but your benchmark doesn't show that. It shows that the 12 GB Tesla K80 is slower than the RTX 3070 Ti (VRAM unspecified) on one synthetic benchmark. You don't provide performance per dollar. You also say it scales up well across multiple cards but don't show that.

3

zveroboy152 OP t1_j20ult3 wrote

You're right, I didn't include that data. I wasn't sure how to calculate it. I'll work on updating the article to reflect that data.
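One simple way to calculate it is throughput divided by price. A sketch; the throughput and price numbers below are made-up placeholders, not measurements from the article:

```python
# Performance per dollar = benchmark throughput / purchase price.
# All numbers here are illustrative placeholders.
gpus = {
    "Tesla K80 (example)": {"images_per_sec": 100.0, "price_usd": 100.0},
    "RTX 3070 Ti (example)": {"images_per_sec": 900.0, "price_usd": 600.0},
}

for name, g in gpus.items():
    perf_per_dollar = g["images_per_sec"] / g["price_usd"]
    print(f"{name}: {perf_per_dollar:.2f} images/sec per USD")
```

With real benchmark numbers and current used-market prices plugged in, this directly supports (or refutes) a "great value" claim.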

I appreciate the constructive criticism, it really helps. :-)

2