Submitted by AmalgamDragon t3_yf73ll in MachineLearning
IntelArtiGen t1_iu24fuh wrote
I'm not sure how it would really learn anything from the input if you don't define a more useful task. How would this model penalize a "collapse" situation where, for example, both models always predict 0, or some other constant value?
Contrastive learning algorithms build two embeddings from different views of the same input, penalize collapse, and train the model to make the two embeddings as close as possible, since they are known to come from the same input even though they come from different views of it. That sounds a bit like what you described, but I don't know of an implementation that matches it exactly.
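To make the collapse point concrete, here is a minimal NumPy sketch of one well-known contrastive objective, the NT-Xent loss from SimCLR. This is an illustration of the general idea, not the specific implementation the commenter has in mind: positives (two views of the same input) are pulled together, but because every other sample in the batch acts as a negative, collapsing all embeddings to the same point makes the negatives just as similar as the positives and keeps the loss high.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.
    z1, z2: embeddings of two views of the same batch, shape (N, D).
    Row i of z1 and row i of z2 form a positive pair; all other
    rows in the batch serve as negatives."""
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)      # (2N, D)
    sim = z @ z.T / temperature               # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    n = z1.shape[0]
    # the positive for sample i is its other view at index (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all other pairs in the batch
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

# Spread embeddings (orthogonal negatives, identical positives)
z = np.eye(4, 8)
spread = nt_xent_loss(z, z)

# Collapsed embeddings: every sample maps to the same vector
c = np.tile(z[:1], (4, 1))
collapsed = nt_xent_loss(c, c)

# Collapse does not minimize this loss: the collapsed batch scores worse
print(spread, collapsed)
```

With collapsed embeddings the softmax over similarities is uniform, so the loss stays at roughly log of the batch size, while well-separated embeddings with matching positives drive it much lower. That is the mechanism by which contrastive objectives rule out the trivial constant solution.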
AmalgamDragon OP t1_iu25s20 wrote
> I'm not sure how it would really learn something from the input if you don't define a more useful task. How would this model penalize a "collapse" situation where both models always predict 0 for example or any random value?
Yeah, it may not work well. I haven't been able to track down whether this is something that has been tried and found wanting, or not.