Submitted by SaltyStackSmasher t3_11euzja in MachineLearning
RaeudigerRaffi t1_jahpbod wrote
Reply to comment by RaeudigerRaffi in [D] backprop through beam sampling ? by SaltyStackSmasher
To add to this: I thought about it a bit, and technically this should be possible in PyTorch with some trickery using custom autograd functions. You can sample with Gumbel-Softmax and return the argmax. In the custom backward you just skip the argmax step and backprop as if the Gumbel-Softmax output had been returned, rather than the argmax of the Gumbel-Softmax output.
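A minimal sketch of that trick, assuming a standard straight-through setup (the class and function names here are illustrative, not from the original comment): the forward pass hardens the Gumbel-Softmax sample into a one-hot argmax, and the custom backward passes the gradient through unchanged, as if the soft sample had been the output.

```python
import torch
import torch.nn.functional as F

class STGumbelArgmax(torch.autograd.Function):
    """Straight-through estimator: forward returns a hard one-hot
    argmax of the Gumbel-Softmax sample; backward skips the argmax
    and passes the gradient through as if the soft sample had been
    returned."""

    @staticmethod
    def forward(ctx, y_soft):
        # Harden the soft sample into a one-hot vector.
        index = y_soft.argmax(dim=-1, keepdim=True)
        return torch.zeros_like(y_soft).scatter_(-1, index, 1.0)

    @staticmethod
    def backward(ctx, grad_output):
        # Skip the (non-differentiable) argmax: gradient flows
        # unchanged into the Gumbel-Softmax output.
        return grad_output

def sample_hard(logits, tau=1.0):
    # Soft, differentiable sample via the relaxation...
    y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
    # ...hardened in the forward pass only.
    return STGumbelArgmax.apply(y_soft)
```

Note that `F.gumbel_softmax(logits, hard=True)` implements essentially the same straight-through behavior via a detach trick, so the custom `Function` is mainly useful if you want to modify what the backward pretends happened.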