Submitted by Nerveregenerator t3_z0msvy in deeplearning
Star-Bandit t1_ix7anbd wrote
Reply to comment by Nerveregenerator in GPU QUESTION by Nerveregenerator
No, each K80 is roughly equivalent to two 1080 Tis: each card has two GPU chips with about 12 GB of RAM per chip, 24 GB of VRAM total per card. The issue is that they run hot; when running a training model they can sit around 70°C. But it's nice to be able to assign each chip to a different task.
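A minimal sketch of what "assign each chip to a different task" looks like in practice: CUDA enumerates each K80 chip as a separate device (0 and 1), so you can pin a process to one chip by setting `CUDA_VISIBLE_DEVICES` before any CUDA framework initializes. The `pin_gpu` helper below is hypothetical, not from the thread.

```python
import os

# Hedged sketch: pin this process to one K80 chip. Must run before importing
# torch/tensorflow, since CUDA device visibility is read at initialization.
def pin_gpu(index: int) -> None:
    # Each K80 chip appears as its own CUDA device, e.g. 0 and 1.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)

pin_gpu(0)  # this process now only sees the first K80 chip
```

Launching two training scripts, each pinned to a different chip, then gives you two independent jobs on one card.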
Star-Bandit t1_ix7avv6 wrote
Actually, after going back over the numbers for the two cards (bandwidth, clock speed, etc.), the 1080 Ti may well have the upper hand; I'd have to run some benchmarks myself.
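A rough sketch of the kind of benchmark that would settle this: time large matrix multiplies, which is close to what dense training layers spend their time on. This version uses NumPy on CPU so it runs anywhere; to compare the two cards you'd swap in a GPU array library (e.g. CuPy or `torch.cuda` tensors) and run it once per device. The function name and sizes are illustrative assumptions.

```python
import time
import numpy as np

def matmul_gflops(n: int = 1024, reps: int = 5) -> float:
    """Measure matrix-multiply throughput in GFLOP/s (rough benchmark)."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up pass so allocation/caching doesn't skew the timing
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    elapsed = time.perf_counter() - start
    # A dense n x n matmul costs ~2 * n^3 floating-point operations.
    return (2 * n**3 * reps) / elapsed / 1e9

print(f"{matmul_gflops():.1f} GFLOP/s")
```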
Dexamph t1_ix7onhf wrote
I think you've way overestimated K80 performance: my 4 GB GTX 960 back in the day could trade blows with a K40, which was a bit more than half a K80. In a straight memory-bandwidth fight, such as Transformer model training, the 1080 Ti is going to win hands down even if you get perfect scaling across both GPUs on the K80, and that's assuming it isn't hamstrung by the ancient Kepler architecture in any way at all.
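Back-of-envelope check using published spec-sheet bandwidth figures (quoted from memory, so treat them as assumptions): each K80 chip has ~240 GB/s of GDDR5 bandwidth, while the 1080 Ti has ~484 GB/s of GDDR5X. Even perfect scaling across both K80 chips only ties the 1080 Ti's aggregate bandwidth, and any single job pinned to one chip gets roughly half of it.

```python
# Assumed spec-sheet numbers for a rough comparison:
k80_bw_per_chip = 240.0   # GB/s GDDR5, per K80 chip
gtx1080ti_bw = 484.0      # GB/s GDDR5X

# Perfect two-chip scaling on the K80 barely reaches parity...
print(gtx1080ti_bw / (2 * k80_bw_per_chip))  # ~1.01

# ...and a single job on one chip sees about half the 1080 Ti's bandwidth.
print(gtx1080ti_bw / k80_bw_per_chip)        # ~2.02
```

So the raw-bandwidth case for the 1080 Ti rests on per-chip comparison plus the architectural gap, not on aggregate numbers.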