JH4mmer t1_ixhvl5k wrote

You may be making a different set of assumptions about the training data than I am, so let me clarify a bit. :-)

If you start with images that truly do contain just one class, the addition of a new class label wouldn't change anything. Your label vector for the existing images would migrate from [1, 0] to [1, 0, 0], something that can be done automatically without additional human intervention. Your new images (used for training the new class) would have a label of [0, 0, 1].
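That migration step can be sketched in a few lines of numpy. This is just an illustrative sketch (the array names and class ordering are my own, not anything from a particular library):

```python
import numpy as np

# Existing one-hot labels for a 2-class problem, e.g. (dog, cat).
old_labels = np.array([[1, 0],
                       [0, 1],
                       [1, 0]])

# Adding a third class: pad every existing label vector with a trailing zero.
# This is purely mechanical -- no human relabeling needed, as long as each
# image truly contains exactly one class.
migrated = np.pad(old_labels, ((0, 0), (0, 1)), constant_values=0)

# Images collected for the new class get the label [0, 0, 1].
new_label = np.array([0, 0, 1])
```

The same padding works for any number of existing classes; only the images that genuinely contain the new class need fresh labels.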

If, however, your images already contain more than one possible class (which is far more common in real-world data), the original labels would already be invalid, since the original labeling assumed there was only one correct answer. Those images that do contain multiple classes would have to be relabeled, yes.

The process I'm describing is a mechanical one that doesn't involve a separate knowledge distillation step. It's a technique my team has used successfully in industrial retail applications, where the number of classes is truly an unknown, and we have to add or remove classes from our trained models frequently.

JH4mmer t1_ixhfmew wrote

My recommendation would be to drop the "other" class entirely. That's a classic mistake I've seen juniors make many times, and it doesn't really work out the way you expect in the real world. The main problem with that approach is that a catch-all class like that has effectively infinite variance (theoretically requiring infinite training data). Plus, your labels often become massively unbalanced relative to the positive classes.

Instead, think of your model as having multiple tails, one for each class you actually care about (e.g., what is the probability that a dog is in this image? What is the probability that a cat is in this image? Etc.). Each output has its own logistic (sigmoid) activation that's independent of the other classes. Where before you might have had a softmax layer that returned [0.2, 0.3, 0.5] for (dog, cat, other), you might now have [0.8, 0.7] for (dog, cat). The outputs will not sum to 1 because they are independent of one another.
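Here's a minimal PyTorch sketch of what I mean. The trunk architecture, layer sizes, and class count are all placeholders; the point is that each class gets its own independent sigmoid output rather than a shared softmax:

```python
import torch
import torch.nn as nn

class MultiTailNet(nn.Module):
    """Shared trunk with one independent sigmoid output per class."""
    def __init__(self, in_features=128, num_classes=2):
        super().__init__()
        # Shared feature extractor (the "first N layers").
        self.trunk = nn.Sequential(nn.Linear(in_features, 64), nn.ReLU())
        # One logit per class; sigmoids are applied independently below.
        self.tails = nn.Linear(64, num_classes)

    def forward(self, x):
        # Independent per-class probabilities -- they need not sum to 1.
        return torch.sigmoid(self.tails(self.trunk(x)))

model = MultiTailNet()
probs = model(torch.randn(1, 128))  # e.g. tensor([[p_dog, p_cat]])
```

For training you'd pair this with a per-output binary cross-entropy loss (in PyTorch, `nn.BCELoss` on the probabilities, or `nn.BCEWithLogitsLoss` on the raw logits for better numerical stability).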

Note that this is the approach you would take for multi-label classification as well, so you might want to read up on that pattern for more information.

Lastly, if you have a trained model in this format, adding a new class is very easy. The first N layers of the network are shared for all classes and so are already pretrained for you. You would add a new tail to the model using whichever weight initialization strategy you care about, add some samples of the new class, and then do some fine tuning on the new tail layer(s) to make sure that your network can effectively detect the new class.
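A hypothetical sketch of that extension step, again in PyTorch (the module names and sizes are made up for illustration): freeze the pretrained trunk and existing tails, bolt on a freshly initialized tail, and fine-tune only the new parameters.

```python
import torch
import torch.nn as nn

# Pretend these were already trained: a shared trunk and two class tails.
trunk = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared, pretrained
old_tails = nn.Linear(64, 2)                          # e.g. (dog, cat)

# New tail for the added class, with whatever init strategy you prefer.
new_tail = nn.Linear(64, 1)

# Freeze the pretrained parameters so fine-tuning touches only the new tail.
for p in list(trunk.parameters()) + list(old_tails.parameters()):
    p.requires_grad = False

optimizer = torch.optim.Adam(new_tail.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One illustrative fine-tuning step on made-up data.
x = torch.randn(8, 128)                   # batch of input features
y = torch.randint(0, 2, (8, 1)).float()   # binary labels for the new class
logits = new_tail(trunk(x))
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```

Only `new_tail` accumulates gradients here; the frozen trunk and old tails are untouched, which is what makes adding (or removing) a class cheap.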

Of course, there are many variations on this training approach. You may choose to also fine-tune the entire network with a dataset that includes samples of the new class, but hopefully you get the idea.

I hope this points you in the right direction! Cheers.