Submitted by Old_Scallion2173 t3_115wu59 in MachineLearning

Hello, community.

Description:

I am planning to build a detection model using YOLOv8 to detect leukemia cells in blood samples. I started learning about deep learning two months ago and am eager to try instance segmentation on my current dataset instead of bounding boxes, since the cells are closely clustered together. I need advice on whether I should use bounding boxes or instance segmentation, given my dataset and expected results.
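For concreteness, here is a minimal sketch of what the two options look like with the Ultralytics YOLOv8 API; the dataset config name `cells.yaml` and the training settings are placeholders, not recommendations.

```python
from ultralytics import YOLO

# Option A: bounding-box detection
det_model = YOLO("yolov8n.pt")            # pretrained detection weights
det_model.train(data="cells.yaml", epochs=100, imgsz=640)

# Option B: instance segmentation (labels must contain polygon masks)
seg_model = YOLO("yolov8n-seg.pt")        # pretrained segmentation weights
seg_model.train(data="cells.yaml", epochs=100, imgsz=640)
```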

Context:

Leukemia is caused by an overabundance of immature or abnormal white blood cells, which overwhelm the bloodstream and inhibit the proper functioning of normal white blood cells. There are three classes in my dataset: lymphoblasts, promyelocytes, and neutrophils, and I need to be able to detect all of them.

Expected Results:

As this is a medical domain, false positives are acceptable, but false negatives are not.

About the dataset:

[sample images: lymphoblast, promyelocyte, neutrophil, and a sample test image]

lymphoblasts: 101 images

promyelocytes: 91 images

neutrophils: 133 images

More context for your reading:

An overabundance of lymphoblasts results in acute lymphoblastic leukemia (ALL), while acute promyelocytic leukemia (APML/APL) is caused by an abnormal accumulation of promyelocytes. Neutrophils do not cause leukemia.


Comments


blackhole077 t1_j93xnn8 wrote

Since I'm on a mobile device I'll write a shorter answer that hopefully gives you some insight.

From what I understand of your question, you want to know whether bounding boxes would perform worse because the cells you wish to detect are so close together.

Both methods may struggle with cells in close proximity, though instance segmentation may handle it better. However, I will reframe the question slightly.

First, there's a reason that object detection and instance segmentation are different methods. The latter is preferred when you need to know which pixels belong to the detected class, which I don't think is what you're aiming for.

Second, the annotation process is, of course, more labor-intensive when you want segmentation masks. Luckily, you can generate bounding boxes from masks easily (see the sketch below), but keep this in mind if you're on a tight schedule.
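For example, deriving an axis-aligned box from a binary instance mask takes only a few lines; a minimal sketch (the helper name `mask_to_bbox` is just illustrative):

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x_min, y_min, x_max, y_max) for a binary instance mask,
    or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```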

If you have additional questions please let me know. I wish you luck in your endeavor.

Hope this helps


Old_Scallion2173 OP t1_j945zye wrote

Thank you for taking the time to answer my question. After reading your answer, I've come to the conclusion that image segmentation could improve my model, but I wouldn't be using it for its intended purpose, and also that I have a lot of reading to do :). I do want to ask, though: do you think I should instead focus on fine-tuning my model and getting more data to improve it? Maybe I'm getting too optimistic about instance segmentation.


blackhole077 t1_j96g3wh wrote

> I do want to ask, though: do you think I should instead focus on fine-tuning my model and getting more data to improve it? Maybe I'm getting too optimistic about instance segmentation.

I'm glad I've been of assistance. As for your follow-up question, it generally never hurts to have more data to work with and, of course, fine-tuning your existing models (if you have any at this time) can help as well.

I would say, though, that you should first determine what metrics you want to see from your model. As you mentioned earlier, you want to keep false negatives as low as possible.

Naturally this translates to maximizing recall, which generally comes at the expense of precision. Thus, the question could be reframed as: "At X% recall, how precise will the model be?" and "Which model parameters can I tune to influence precision at that recall?"

However, how false positives (FP) and false negatives (FN), and by proxy precision and recall, are defined is not as straightforward in object detection as it is in image classification.
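For intuition, here is a deliberately simplified sketch of how TP/FP/FN are typically counted in detection: predictions are matched to ground-truth boxes by IoU at some threshold, leftover predictions are FPs, and unmatched ground truths are FNs. Real evaluators (e.g. the COCO protocol) layer confidence sweeps, classes, and object area ranges on top of this, so treat it purely as an illustration.

```python
def iou(a, b):
    """IoU of two boxes in (x_min, y_min, x_max, y_max) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def count_tp_fp_fn(preds, gts, iou_thresh=0.5):
    """Greedy one-to-one matching for a single image and class.
    `preds` should be sorted by descending confidence."""
    matched = set()
    tp = fp = 0
    for p in preds:
        best_iou, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            o = iou(p, g)
            if o > best_iou:
                best_iou, best_j = o, j
        if best_iou >= iou_thresh:
            tp += 1
            matched.add(best_j)   # each ground truth can be matched only once
        else:
            fp += 1
    fn = len(gts) - len(matched)
    return tp, fp, fn
```

Precision is then tp / (tp + fp) and recall is tp / (tp + fn); sweeping the detector's confidence threshold trades one against the other, which is exactly the "precision at X% recall" question above.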

Since I'm currently dealing with this problem, albeit in a different area altogether, here's a paper that I found useful for getting interpretable metrics:

https://arxiv.org/abs/2008.08115

This paper and its GitHub repository basically work on breaking down exactly what your model struggles with, as well as showing the FP/FN rates on your dataset. It might be a little unwieldy, since the tool has been somewhat neglected by its creator, but it's certainly worth looking into.
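If it helps, using that toolbox (TIDE, `pip install tidecv`) looks roughly like the sketch below; the calls are from memory of its README and the file names are placeholders, so double-check against the repository.

```python
from tidecv import TIDE, datasets

tide = TIDE()
gt = datasets.COCO("annotations.json")           # ground truth in COCO format
dets = datasets.COCOResult("detections.json")    # your model's detections
tide.evaluate(gt, dets, mode=TIDE.BOX)           # TIDE.MASK for segmentation
tide.summarize()   # console tables: mAP lost to cls/loc/dupe/bkg/miss errors
tide.plot()        # summary figure of the error breakdown
```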

Hope this helps.


Morteriag t1_j951p1n wrote

I would use instance segmentation; it feeds the network more information and increases the chance of success. The output is also easier to interpret, which helps guide data selection in the next iteration. The annotation process is more labour-intensive, but good tools/annotation platforms go a long way toward speeding things up. Once your model is good enough, it is mostly a matter of correcting small mistakes.


Old_Scallion2173 OP t1_j95a3nm wrote

I see. Currently I'm using Roboflow, as it is convenient and has a polygonal labelling tool. By the way, do you think I should also do transfer learning and/or k-fold cross-validation, since my dataset is small (325 images)?
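For what it's worth, a minimal sketch of k-fold splitting at the image level with scikit-learn; the directory layout and the way the class is read from the filename are assumptions purely for illustration.

```python
from pathlib import Path
from sklearn.model_selection import StratifiedKFold

image_paths = sorted(Path("dataset/images").glob("*.jpg"))    # placeholder path
labels = [p.stem.split("_")[0] for p in image_paths]          # assumes class name in filename

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(image_paths, labels)):
    train_files = [image_paths[i] for i in train_idx]
    val_files = [image_paths[i] for i in val_idx]
    # write per-fold train/val lists, point a data.yaml at them, train one model per fold
```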


Morteriag t1_j95aodr wrote

That size would do well for a PoC, not much more, and you should be able to annotate all the data within a day or two. Automation does not make that much sense at this scale. I love Roboflow for bounding boxes, but Labelbox has superior tools for segmentation. Sure, with this small a dataset you can use cross-validation, although a hold-out test set is also preferable. At this scale I would almost consider hand-picking the test set to make sure you get a sense of how the model performs on challenging examples. What is the pixel size of your images? Microscopy/histology images can typically cover large areas, and one image could in fact be considered a mosaic of many "normal"-sized images.
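On that last point, if the slides really are large mosaics, a common trick is to cut them into overlapping detector-sized tiles and map detections back to mosaic coordinates; a rough sketch (tile size and overlap are arbitrary here):

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int = 640, overlap: int = 64):
    """Yield (x_offset, y_offset, crop) tiles covering a large image,
    with some overlap so cells on tile borders are not cut in half."""
    h, w = img.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, img[y:y + tile, x:x + tile]
```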


Morteriag t1_j95b954 wrote

Last I checked, Roboflow only had point-to-point vector masks for segmentation. In my experience that makes getting quality annotations a pain. In Labelbox, you can also hold down the mouse button. Hasty.ai focuses on auto-annotation, and by the look of the image you posted, it might be a good fit for your use case.


FHIR_HL7_Integrator t1_j943lrj wrote

I'd definitely be interested to hear about your progress in the future. Imaging isn't my area, but medical is. I would appreciate updates as progress continues.


Old_Scallion2173 OP t1_j946j9i wrote

Most definitely! I'll try to find a way to post progress updates. Also, I have a feeling that I'm probably using a lot of the wrong terms in my attempt to describe the different types of leukemia, lol.


PHEEEEELLLLLEEEEP t1_j96ezic wrote

You could try existing cell segmentation algorithms like StarDist or Cellpose.
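For instance, running a pretrained Cellpose model looks roughly like the sketch below; the model type, channels, and file name are guesses you'd need to adapt for blood smears, and the API may differ between Cellpose versions.

```python
from cellpose import models
from skimage import io

img = io.imread("sample_smear.png")                # placeholder path
model = models.Cellpose(model_type="cyto")         # generic pretrained cytoplasm model
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
# `masks` is an integer-labelled array: 0 is background, 1..N are individual cells
```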
