
ButterscotchLost421 OP t1_iwggwv8 wrote

Thank you! What do you mean by ADM? Adam?

When training in parallel, which technique did they use? Calculating the gradient on a batch of size `N` on each device and then synchronizing across all the devices to get the mean gradient?

1

yanivbl t1_iwgjnht wrote

No, not Adam. I was referring to ADM, the model from the "Diffusion Models Beat GANs" paper.

I never trained such a model, I just read the paper. But yeah, it's most likely what you said (a.k.a. data parallelism).
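If it helps, here's a minimal sketch of that pattern using raw `torch.distributed` (function names like `train_step` are just illustrative; in practice you'd usually wrap the model in `DistributedDataParallel`, which does this gradient averaging for you):

```python
# Minimal sketch of synchronous data parallelism with torch.distributed.
# Each process computes gradients on its own batch of size N, then gradients
# are all-reduced and averaged before every replica takes the same optimizer step.
# Assumes the process group is already initialized (e.g. launched via torchrun).
import torch
import torch.distributed as dist

def train_step(model, loss_fn, optimizer, batch, targets):
    optimizer.zero_grad()
    loss = loss_fn(model(batch), targets)
    loss.backward()  # local gradient for this device's batch of size N

    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients across all devices, then divide to get the mean.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

    optimizer.step()  # identical averaged update on every replica
```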

2