
blackhole077 t1_j96g3wh wrote

> I do wish to ask, though: do you think I should instead focus on fine-tuning my model and gathering more data to improve it? Maybe I'm getting too optimistic about instance segmentation.

I'm glad I've been of assistance. As for your follow-up question: it generally never hurts to have more data to work with, and fine-tuning your existing models (if you have any at this point) can help as well.

I would say, though, that you should first decide what metrics you want to see from your model. As you mentioned earlier, you want false negatives to be as low as possible.

Naturally, this translates to maximizing recall, which generally comes at the expense of precision. The question can thus be reframed as: "At X% recall, how precise will the model be?" and "Which model parameters can I tune to influence precision at that recall?"
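To make that concrete, here's a minimal sketch of answering "at X% recall, how precise is the model?" with scikit-learn. It assumes you already have a confidence score and a correct/incorrect label for each detection (producing those labels is the tricky part in detection, as noted below); `precision_at_recall` is just an illustrative helper I made up, not something from the paper:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def precision_at_recall(y_true, scores, target_recall=0.95):
    """Return (precision, score_threshold) at the highest confidence
    threshold whose recall is still >= target_recall."""
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # Points are ordered by increasing threshold (decreasing recall);
    # the last array elements correspond to "predict nothing" and have
    # no threshold, hence the [:-1].
    ok = recall[:-1] >= target_recall
    if not ok.any():
        return None, None  # the model never reaches the target recall
    idx = np.where(ok)[0][-1]
    return precision[idx], thresholds[idx]

# Toy example: 1 = detection matched a ground truth, 0 = false positive.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.5, 0.3, 0.2])
p, thr = precision_at_recall(y_true, scores, target_recall=0.8)
print(f"precision={p:.2f} at confidence threshold {thr:.2f}")
```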

However, how false positives (FP) and false negatives (FN), and by proxy precision and recall, are defined is not as straightforward in object detection as it is in image classification: a detection only counts as a true positive if it sufficiently overlaps a ground-truth box (usually measured by IoU), so the counts depend on the overlap threshold and on how predictions are matched to ground truths.
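For illustration, here's a rough sketch of the usual convention (my own simplification, not any particular paper's method): predictions are matched greedily in order of confidence, a prediction is a TP only if its IoU with a still-unmatched ground truth clears a threshold, and unmatched ground truths become FNs:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_tp_fp_fn(preds, gts, iou_thresh=0.5):
    """preds: list of (box, score); gts: list of boxes."""
    matched = set()
    tp = fp = 0
    for box, _score in sorted(preds, key=lambda p: -p[1]):
        # Find the best still-unmatched ground truth for this prediction.
        best_j, best_iou = None, iou_thresh
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            overlap = iou(box, gt)
            if overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j is None:
            fp += 1           # nothing overlaps enough: false positive
        else:
            tp += 1
            matched.add(best_j)
    fn = len(gts) - len(matched)  # ground truths nobody detected
    return tp, fp, fn
```

Change `iou_thresh` and the FP/FN counts shift with it, which is exactly why detection metrics are reported at specific IoU thresholds.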

Since I'm currently dealing with this problem, albeit in a different area altogether, here's a paper (TIDE) that I found useful for getting interpretable metrics:

https://arxiv.org/abs/2008.08115

This paper and its GitHub repository break down exactly what your model struggles with, and show the FP/FN rates on your dataset. It might be a little unwieldy, since the tool has been somewhat neglected by its creator, but it's certainly worth looking into.
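If memory serves, the package is `tidecv`, and basic usage from its README looks roughly like the snippet below; treat the exact calls and the paths as approximate, since the tool hasn't seen much maintenance:

```python
# Hedged sketch based on the tidecv README; the API may have drifted.
from tidecv import TIDE, datasets

tide = TIDE()
tide.evaluate(
    datasets.COCO(),                                   # ground-truth annotations (COCO format)
    datasets.COCOResult('path/to/your/results.json'),  # your model's detections
    mode=TIDE.MASK,  # TIDE.BOX for boxes; TIDE.MASK for instance segmentation
)
tide.summarize()  # prints the per-error-type breakdown (Cls, Loc, Dupe, Bkg, Miss, ...)
tide.plot()       # writes plots of the error distribution
```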

Hope this helps.
