
yanivbl t1_iwgb683 wrote

They had more GPUs, training in parallel. Not sure about CIFAR-10, but the number I read for ADM on ImageNet is ~1000 days on a single V100.

4

ButterscotchLost421 OP t1_iwggwv8 wrote

Thank you! What do you mean by ADM? Adam?

When training in parallel, which technique did they use? Calculate the gradient of a batch of size `N` on each of the devices and then synchronize all the devices to get the mean gradient?

1

yanivbl t1_iwgjnht wrote

No, not Adam. I was referring to ADM, the model from the "Diffusion Models Beat GANs" paper.

I never trained such a model, just read about it. But yeah, it's most likely what you said (a.k.a. data parallelism).
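The scheme described above can be sketched in a few lines. This is a toy simulation (NumPy, no actual multi-GPU code): each "device" computes the gradient on its own shard of the batch, and averaging those gradients reproduces the full-batch gradient. All names here are illustrative, not from any framework.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)        # shared model weights (simple linear model)
X = rng.normal(size=(8, 3))   # global batch of 8 examples
y = rng.normal(size=8)

def grad(w, Xb, yb):
    # gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)
    return Xb.T @ (Xb @ w - yb) / len(yb)

# split the global batch evenly across 4 simulated devices
shards = np.split(np.arange(8), 4)
local_grads = [grad(w, X[idx], y[idx]) for idx in shards]

# "all-reduce" step: average the per-device gradients
g_sync = np.mean(local_grads, axis=0)

# with equal-sized shards, this equals the full-batch gradient
assert np.allclose(g_sync, grad(w, X, y))
```

In real setups (e.g. PyTorch's DistributedDataParallel) the averaging happens via an all-reduce collective over the network, but the math is the same: for equal-sized shards, the mean of per-device gradients equals the gradient of the whole batch.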

2