Sepic2 t1_j0fzir8 wrote

Maybe a dumb question, but I don't see how this method enables learning in any way:

- The (first) forward pass calculates the loss/goodness, and then you need backpropagation to change the weights of the network according to the derivatives of that loss/goodness. How does the network learn if the weights are not changed and you only calculate goodness?

The paper says:

> "The positive pass operates on real data and adjusts the weights to increase the goodness in every hidden layer. The negative pass operates on "negative data" and adjusts the weights to decrease the goodness in every hidden layer."
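If I try to sketch what that would mean without backprop between layers, my best guess is that each layer takes a purely local gradient step on its own goodness. Something like this (my own guess in PyTorch; the layer shape, learning rate, threshold, and loss form are my assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

# One hidden layer; shape, lr, and threshold are made up for illustration.
W = torch.randn(784, 500, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.03)
theta = 2.0  # goodness threshold

def layer(x):
    # detach() blocks gradients to earlier layers: the update stays local,
    # so there is no backprop *through* the network, only within one layer.
    return torch.relu(x.detach() @ W)

def goodness(h):
    # goodness = sum of squared activities (the measure the paper proposes)
    return h.pow(2).sum(dim=1)

def ff_step(x_pos, x_neg):
    g_pos = goodness(layer(x_pos))  # positive pass: push goodness above theta
    g_neg = goodness(layer(x_neg))  # negative pass: push goodness below theta
    # -log sigmoid(g_pos - theta) - log sigmoid(theta - g_neg)
    loss = F.softplus(theta - g_pos).mean() + F.softplus(g_neg - theta).mean()
    opt.zero_grad()
    loss.backward()  # only the local derivative, since inputs are detached
    opt.step()

# e.g. ff_step(real_batch, negative_batch), one such step per layer
```

But maybe that local-update reading is wrong, hence my other guess: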

- Could it be that in the first "forward" pass you actually do both forward and backward propagation, and the name just sounds fancy, with the second "forward" trying to implement contrastive learning in a clever way?
