Submitted by zxkj t3_1126g64 in MachineLearning

Wondering if there’s a term for this.

I’m training NNs for a scenario that works best with a small batch size, so there are many batches per epoch.

There are a couple of particular samples that are VERY important. Let’s say 3 important samples out of the thousands I train on.

I found the end application works best when I include these important samples, repeated, in every batch. This is as opposed to simply giving the samples a large weight, because the large weight doesn’t matter after looping through many batches in an epoch.

So the NN learns the other less important stuff while being forced to remain in good agreement with the important samples.

Does this technique have a name?

EDIT: In case anyone is curious, these are physics-informed NNs and the important samples are equilibrium mechanical structures. The NN therefore learns what equilibrium is, with everything else being small deviations from equilibrium.
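For concreteness, a minimal NumPy sketch of one way to build such batches; the function name, `important_idx`, and the batch size of 16 are illustrative assumptions rather than anything from the post:

```python
import numpy as np

def batches_with_anchors(X, y, important_idx, batch_size=16, rng=None):
    """Yield minibatches that always contain the important samples.

    The remaining samples are shuffled and split as usual; the handful of
    important samples is appended to every batch, so each gradient step
    sees them no matter where we are in the epoch.
    """
    rng = rng or np.random.default_rng()
    rest = np.setdiff1d(np.arange(len(X)), important_idx)
    rng.shuffle(rest)
    for start in range(0, len(rest), batch_size):
        idx = np.concatenate([rest[start:start + batch_size], important_idx])
        yield X[idx], y[idx]
```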

18

Comments


BossOfTheGame t1_j8ikmcj wrote

Because you have a small batch size, my feeling is that you probably want a very small dropout rate on the important items, if only to decrease the chance the network overfits to them. Maybe 1 in 100 batches excludes the important items and the rest include them. But perhaps it doesn't matter.
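A hypothetical sketch of this suggestion (the ~1% rate and the function name are placeholders, not the commenter's code):

```python
import numpy as np

def maybe_add_important(batch_idx, important_idx, skip_prob=0.01, rng=None):
    """Append the important samples to a batch, except in a random ~1% of
    batches where they are left out, to reduce the chance of overfitting."""
    rng = rng or np.random.default_rng()
    if rng.random() < skip_prob:
        return batch_idx  # occasionally train without the important samples
    return np.concatenate([batch_idx, important_idx])
```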

3

nerdimite t1_j8is883 wrote

This seems somewhat similar to hard example mining, except that here you already know which samples are hard.

1

Red-Portal t1_j8ke7vj wrote

It's literally called importance sampling in the SGD literature. You normally have to downweight the "important samples" to counter the fact that you're sampling them more often. Whether this practice actually accelerates convergence has been an important question in SGD until very recently. Check this paper.
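A rough sketch of the correction described here, assuming the important samples are injected into every batch while ordinary samples are seen once per epoch; the MSE loss and the PyTorch setting are assumptions, and the weight is simply 1 over the relative sampling frequency:

```python
import torch

def reweighted_mse(pred, target, is_important, batches_per_epoch):
    # An ordinary sample is seen ~once per epoch; a sample injected into
    # every batch is seen ~batches_per_epoch times, so its importance-
    # sampling correction weight is 1 / batches_per_epoch.
    weights = torch.where(
        is_important,
        torch.full_like(pred, 1.0 / batches_per_epoch),
        torch.ones_like(pred),
    )
    return (weights * (pred - target) ** 2).mean()
```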

3

bushrod t1_j8m68xt wrote

This is similar in spirit to data augmentation, but focused on a few specific samples. There may not be an established name for it, but you could consider it a form of "strategic oversampling" or "strategic repetition" of important samples. By repeating them in every batch, you increase their influence on training and potentially help the network converge to a solution that accounts for them.

It's worth noting that this may not always be appropriate or necessary, and it can lead to overfitting if not used carefully. However, when a small number of samples have a disproportionate impact on the end application, repeating them in every batch can be a useful way to ensure the network incorporates their information effectively.

:-P

1