Submitted by AJ521432 t3_yroqlt in MachineLearning

I have to work on a chest X-ray dataset where the objective is to perform object localization of abnormalities present in the chest. The only problem is I do not have the annotations for the images. However, I do have the radiology report for each chest image. Can you suggest how I can proceed further with my project? Thank you

2

Comments


A1-Delta t1_ivx11yf wrote

Is your problem that you don’t know how to interpret reports and identify the abnormality in the scan?

2

AJ521432 OP t1_ivxc22j wrote

Yes, I do not know how to interpret the reports.

1

A1-Delta t1_ivy8pty wrote

Like others have said, you’ll likely need a radiologist to annotate for you.

You could try a sort of self-supervised training approach, but it is going to require a lot more data and frankly probably isn’t going to create very good models in this context.
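
For what it's worth, here is a minimal sketch of what one self-supervised step (SimCLR-style contrastive pretraining on unlabeled X-rays) could look like in PyTorch. The backbone, augmentations, and hyperparameters are placeholders, not recommendations tuned for chest radiographs:

```python
# Minimal SimCLR-style contrastive pretraining sketch (hypothetical settings).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Two random augmentations of the same X-ray form a positive pair.
augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # X-rays are grayscale; ResNet expects 3 channels
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.ToTensor(),
])

# Backbone plus a small projection head.
encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()
projector = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (SimCLR) loss."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    # Mask out self-similarity, then treat the other view of each image as the positive.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def train_step(batch_imgs):
    """One step on a batch of raw PIL chest X-ray images."""
    v1 = torch.stack([augment(img) for img in batch_imgs])
    v2 = torch.stack([augment(img) for img in batch_imgs])
    z1, z2 = projector(encoder(v1)), projector(encoder(v2))
    loss = nt_xent_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The pretrained encoder would still need some labeled examples on top to do anything useful for localization, which is why a radiologist is hard to avoid here.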

2

AJ521432 OP t1_ivzfw4y wrote

Thank you for your response u/A1-Delta. I will definitely see what I can do.

1

MisterManuscript t1_ivxatob wrote

You'll need to consult a doctor specialised in respiratory medicine. Pretty sure no one in ML can teach you how to annotate medical images unless they are trained in medicine.

2

sanjuromack t1_ivy3p5s wrote

Yeah, you definitely need subject matter experts. Is this a project for work? What kind of funding and resources do you have access to?

1

AJ521432 OP t1_ivzg6rk wrote

It is for a master's thesis. As I understand it, it is very hard to get radiologists or other physicians to do the labeling unless I have access to some sort of funding, which I don't at the moment.

1

farmingvillein t1_ixncbjn wrote

Check out https://twitter.com/cxbln/status/1595652302123454464:

> 🎉Introducing RoentGen, a generative vision-language foundation model based on #StableDiffusion, fine-tuned on a large chest x-ray and radiology report dataset, and controllable through text prompts!

(Not your full problem, but you may find it helpful!)

More generally, you could probably use the same image-to-text techniques that get used to validate a Stable Diffusion model.
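
As a rough illustration, one common image-to-text check is CLIP-style retrieval: score how well an image matches candidate report sentences. The checkpoint and file name below are just placeholders (generic OpenAI CLIP); in practice you'd want a model trained on chest X-rays and reports:

```python
# Sketch: score image/text agreement with a CLIP-style model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example_cxr.png").convert("RGB")  # hypothetical file
candidate_findings = [
    "no acute cardiopulmonary abnormality",
    "right lower lobe consolidation",
    "cardiomegaly with pulmonary edema",
]

inputs = processor(text=candidate_findings, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher logits mean better image/text agreement; softmax gives a ranking.
probs = outputs.logits_per_image.softmax(dim=-1)
for finding, p in zip(candidate_findings, probs[0]):
    print(f"{p:.3f}  {finding}")
```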

Or for a really quick-and-dirty solution, you could try using a model like theirs to generate training data (image, text pairs) and train an image-to-text model (a variant of which they do in the paper).
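
Very roughly, the synthetic-data idea could look like the sketch below using the `diffusers` library. The checkpoint path is a placeholder, since RoentGen's weights would have to be obtained from the authors or reproduced:

```python
# Sketch: sample synthetic (image, report-text) pairs from a fine-tuned
# text-to-image model, then use them as training data for an image-to-text model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/roentgen-like-checkpoint",  # hypothetical fine-tuned weights
    torch_dtype=torch.float16,
).to("cuda")

prompts = [
    "frontal chest x-ray with left pleural effusion",
    "chest x-ray showing cardiomegaly",
    "normal chest radiograph, no acute findings",
]

synthetic_pairs = []
for prompt in prompts:
    image = pipe(prompt, num_inference_steps=50).images[0]
    synthetic_pairs.append((image, prompt))  # (PIL image, report-style text)

# These pairs could then feed a standard captioning setup,
# e.g. a vision encoder plus a text decoder trained with cross-entropy.
```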

1