
deepForward t1_jdvdq24 wrote

If you already have some labeled chairs, train a first model on them, then run it on the images that contain chairs but no labels. Do a second pass with your enriched dataset, then eventually a third, etc.

You can bootstrap the labelling that way. It should help you label a decent number of chairs, and you can then manually label the remaining ones.
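
Roughly, the loop looks like this (just a sketch; the load/train/detect helpers are hypothetical placeholders for whatever detection framework you use, not a specific library API):

```python
# Rough sketch of the bootstrapping loop. All four helper functions below
# are hypothetical placeholders, not a specific library API.
labeled = list(load_labeled_chair_images())   # images that already have chair boxes
unlabeled = list(load_unlabeled_images())     # images that may contain unlabeled chairs

for _ in range(3):                            # first pass, second pass, third...
    model = train_detector(labeled)
    still_unlabeled = []
    for img in unlabeled:
        boxes = detect_chairs(model, img, conf_threshold=0.9)  # high threshold -> few FP
        if boxes:
            labeled.append((img, boxes))      # enrich the dataset with pseudo-labels
        else:
            still_unlabeled.append(img)
    unlabeled = still_unlabeled

# whatever is still in `unlabeled` after a few rounds gets labeled by hand
```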

2

AI-without-data OP t1_jdvhnjr wrote

Thank you, but I don't fully understand.

Do people train the model in that way as well? In the COCO dataset, some images contain objects that belong to the listed classes but are not labeled.

If people follow your suggested method for training the model, they would first need to filter the COCO dataset down to images whose objects are perfectly labeled (no missed labels) and use that subset to train the model. Then they would run the model on the remaining data to obtain labels for the objects that are missing annotations, and update the entire dataset accordingly. Is this correct?

1

qphyml t1_jdz7ndt wrote

I think you can do it both ways (with or without filtering) and compare. Just speculating now, but the filtering could potentially affect the performance on the other classes (since you change the model's training path for those classes). But my guess is that it shouldn't be a big issue, so I'd probably go about it the way you described if I had to pick one strategy.

1

AI-without-data OP t1_jdzb8fd wrote

OK, I appreciate it! I'm trying to filter the images now.

2

qphyml t1_jdzg13b wrote

Good luck! It would be great to hear how it goes and what insights you get!

1

deepForward t1_je0ktqx wrote

Try the easy way first:

Build a model that only learns chairs, using all the labeled chairs you have, and ignore everything else at first.
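
If you're starting from COCO-style annotations, pycocotools makes it easy to pull out just the chair boxes (the annotation file path below is just an example):

```python
# Pull out only the chair annotations from a COCO-style annotation file,
# using pycocotools. The file path is just an example.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")
chair_cat_ids = coco.getCatIds(catNms=["chair"])
chair_img_ids = coco.getImgIds(catIds=chair_cat_ids)

for img_id in chair_img_ids:
    ann_ids = coco.getAnnIds(imgIds=img_id, catIds=chair_cat_ids, iscrowd=None)
    chair_anns = coco.loadAnns(ann_ids)
    # each ann["bbox"] is [x, y, width, height] in COCO format; feed these
    # (and only these) to your single-class chair detector
```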

Also try image data augmentations and see if they help.
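
For example, something along these lines with the albumentations library (assuming that's what you use; any augmentation pipeline that also transforms the boxes will do):

```python
# Example detection-friendly augmentation pipeline with albumentations
# (assumed installed); the boxes are transformed together with the image.
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.3),
        A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=10, p=0.5),
    ],
    bbox_params=A.BboxParams(format="coco", label_fields=["class_labels"]),
)

# augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)
# augmented["image"] and augmented["bboxes"] are the transformed image and boxes
```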

You are not aiming for the best score; in fact, you don't care about your score as long as you can label new chairs.

You mostly want to tune the model so that you don't get false positives (which would introduce noise into your labels). False negatives are OK, and will occur if you tune the model so that FP are zero. You can, for instance, tune the threshold on the confidence score or class probability (check what the model you're using exposes).
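
Something like this, assuming your detector returns (box, score) pairs in some form:

```python
# Minimal sketch: keep only the detections the model is very confident about.
# `predictions` is assumed to be a list of (box, score) pairs.
CONF_THRESHOLD = 0.9  # start high; lower it only while false positives stay at zero

def keep_confident(predictions, threshold=CONF_THRESHOLD):
    # Missing some chairs (false negatives) is fine here; wrong boxes
    # (false positives) would pollute the pseudo-labels.
    return [(box, score) for box, score in predictions if score >= threshold]
```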

You can also build a basic image validation tool with Jupyter notebook widgets, Streamlit, or your favorite tool, if you want to quickly validate by hand that there are no false positives. It's a very good exercise.
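
A bare-bones version with Jupyter widgets could look like this (assuming the candidates are (image, boxes) pairs coming out of the chair model; adapt to your own format):

```python
# Minimal hand-validation loop for a Jupyter notebook: show each image with
# its predicted chair boxes and accept/reject it with a button click.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import ipywidgets as widgets
from IPython.display import display, clear_output

def review(candidates):
    """Step through (image, boxes) pairs and sort them by hand."""
    accepted, rejected = [], []
    it = iter(candidates)
    out = widgets.Output()
    display(out)

    def show_next(_=None):
        with out:
            clear_output(wait=True)
            try:
                img, boxes = next(it)
            except StopIteration:
                print(f"Done: {len(accepted)} accepted, {len(rejected)} rejected")
                return
            fig, ax = plt.subplots()
            ax.imshow(img)
            for x, y, w, h in boxes:  # COCO-style [x, y, width, height] boxes
                ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, edgecolor="red"))
            plt.show()
            ok = widgets.Button(description="no false positives")
            bad = widgets.Button(description="has false positives")
            ok.on_click(lambda b: (accepted.append((img, boxes)), show_next()))
            bad.on_click(lambda b: (rejected.append((img, boxes)), show_next()))
            display(widgets.HBox([ok, bad]))

    show_next()
    return accepted, rejected
```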

Good luck !

1

AI-without-data OP t1_je5078l wrote

I see. I think changing the threshold on the confidence score or class probability is a good idea. I'll try these approaches step by step. Thank you!

1