Submitted by AI-without-data t3_123gy5h in deeplearning
AI-without-data OP t1_jdvhnjr wrote
Reply to comment by deepForward in Training only Labelled Bbox for Object Detection. by AI-without-data
Thank you, but I don't fully understand it yet.
Do people actually train models that way? In the COCO dataset, some images contain objects that belong to the listed classes but are not labeled.
If I follow your suggested method, I would first need to select the COCO images whose objects are all labeled (no missed labels) and train the model on that filtered subset. Then I would run the model on the remaining images to generate labels for the objects missing from the annotations, and update the whole dataset accordingly. Is this correct?
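Roughly something like this, I imagine (just a sketch — the torchvision detector and the confidence threshold are placeholders, not anything prescribed in this thread):

```python
import torch
import torchvision

# Sketch only: in practice `model` would be the detector already trained on the
# fully-labelled subset of COCO, with its own weights loaded.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
model.eval()

CONF_THRESHOLD = 0.8  # keep only confident predictions as pseudo-labels

@torch.no_grad()
def pseudo_label(image_tensor):
    """Return the boxes, labels, and scores the detector is confident about."""
    pred = model([image_tensor])[0]   # torchvision detectors return a list of dicts
    keep = pred["scores"] >= CONF_THRESHOLD
    return pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]
```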
qphyml t1_jdz7ndt wrote
I think you can do it both ways (with or without filtering) and compare. Just speculating now, but the filtering could potentially affect performance on the other classes (since you change the model's training path for those classes). My guess is that it shouldn't be a big issue, though, so I'd probably go about it the way you described if I had to pick one strategy.
AI-without-data OP t1_jdzb8fd wrote
OK, I appreciate it! I'm trying to filter the images now.
qphyml t1_jdzg13b wrote
Good luck! Would be great to hear how it goes and which insights you get!
deepForward t1_je0ktqx wrote
Try the easy way first:
Build a model that learns only chairs, using all the labeled chairs you have, and ignore everything else at first.
Also try image data augmentation and see if it helps.
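Building that chairs-only training set could look roughly like this (a sketch with pycocotools; the annotation path is a placeholder):

```python
from pycocotools.coco import COCO

# Load the standard COCO annotations (path is a placeholder).
coco = COCO("annotations/instances_train2017.json")
chair_id = coco.getCatIds(catNms=["chair"])[0]
img_ids = coco.getImgIds(catIds=[chair_id])   # every image with at least one labelled chair

def chair_boxes(img_id):
    """Return only the chair boxes for one image; every other class is simply ignored."""
    ann_ids = coco.getAnnIds(imgIds=[img_id], catIds=[chair_id], iscrowd=False)
    return [ann["bbox"] for ann in coco.loadAnns(ann_ids)]   # COCO boxes are [x, y, w, h]
```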
You're not aiming for the best score; in fact, you don't care about your score at all as long as you can label new chairs.
You mostly want to tune the model so that you don't get false positives (which would introduce noise into your labels). False negatives are OK, and they will occur if you tune the model so that false positives are zero. You can tune, for instance, the threshold on the confidence score or class probability (check the model you're using).
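For example, you could sweep the threshold on a small hand-checked validation split and keep the lowest value that still gives zero false positives (a sketch — the box format and the `val_set` structure are assumptions, not tied to any particular library):

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def false_positives(detections, gt_boxes, threshold, iou_min=0.5):
    """Count detections above `threshold` that overlap no ground-truth box."""
    return sum(
        1
        for box, score in detections
        if score >= threshold and all(iou(box, g) < iou_min for g in gt_boxes)
    )

def pick_threshold(val_set):
    """`val_set` is assumed to be a list of (detections, gt_boxes) pairs, one per image."""
    for t in np.linspace(0.5, 0.99, 50):
        if sum(false_positives(d, g, t) for d, g in val_set) == 0:
            return t
    return 0.99
```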
You can also build a basic image validation tool with Jupyter notebook widgets, Streamlit, or your favorite tool, if you want to quickly check by hand that there are no false positives. It's a very good exercise.
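A bare-bones version of that check could look like this (a sketch using plain matplotlib and input() instead of widgets; the sample format is an assumption):

```python
import matplotlib.pyplot as plt
from PIL import Image

def review(samples):
    """Step through predicted crops and keep only the ones confirmed by hand.

    `samples` is assumed to be a list of (image_path, box) pairs,
    with box = (x1, y1, x2, y2) in pixel coordinates.
    """
    kept = []
    for path, box in samples:
        crop = Image.open(path).crop(box)
        plt.imshow(crop)
        plt.axis("off")
        plt.show()
        if input("Is this really a chair? [y/n] ").strip().lower() == "y":
            kept.append((path, box))
    return kept
```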
Good luck!
AI-without-data OP t1_je5078l wrote
I see. I think adjusting the threshold on the confidence score or class probability is a good idea. I'll try these approaches step by step. Thank you!