Submitted by nature_and_carnatic t3_zyp602 in MachineLearning

I worked on explainable AI for a healthcare-related project. Nothing fancy to be honest, just a few already existing XAI methods. But I would like to continue research in the field of interpretability. Does anyone have an idea of how to proceed further? If anyone has ideas in mind, please feel free to share them so they can be useful to others too. Thanks

16

Comments


zyl1024 t1_j27fudl wrote

If you've tried some existing XAI techniques, I bet you were left deeply unsatisfied by most of them. I work on interpretability; I am convinced that most (maybe all?) current methods don't work, but I am also convinced that XAI in general will work eventually. So I would recommend starting from the basics: what do you want out of XAI, and in what ways are the current methods failing? Trying to answer these two questions will surface a lot of research questions.

8

nature_and_carnatic OP t1_j27hfjn wrote

Yeah, sadly most of the existing techniques like LIME, SHAP, etc. didn't give good results.

> What do you want out of XAI?

For now I need to explain the model we are using, but that has led nowhere in 3 months :(. What I was thinking was to improve on an existing XAI technique; honestly, that was the only thing that came to mind regarding research in XAI. But since you are already working in XAI, I am open to your recommendations :). I am an undergrad, so I have some time to work on this before I start a PhD (I mean, I'm thinking of doing a PhD, not decided yet).

> In what ways are they failing?

OK, honestly, the dataset is thyroid ultrasound images, which aren't interpretable by an average human to begin with. We asked a radiologist to segment the cancerous portion, and the LIME and SHAP attributions aren't matching the regions the radiologist segmented.

Our black-box model extracts pass-band discrete cosine transform (DCT) feature vectors and then uses a standard SVM to classify them. I haven't found many XAI techniques where the black-box model isn't NN-based.
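For reference, the setup is roughly like the sketch below: a model-agnostic explainer (SHAP's KernelExplainer, which doesn't care whether the black box is a NN) on top of a DCT-features + SVM pipeline. The data and feature dimensions here are placeholders, not our actual pipeline.

```python
# Rough sketch only -- placeholder data, not our real DCT features.
# shap.KernelExplainer is model-agnostic, so it works with an SVM just as well as a NN.
import numpy as np
import shap
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))    # stand-in for pass-band DCT feature vectors
y_train = rng.integers(0, 2, size=200)  # stand-in for benign/malignant labels

svm = SVC(probability=True).fit(X_train, y_train)

# Small background sample keeps KernelExplainer tractable
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(svm.predict_proba, background)

X_test = rng.normal(size=(5, 16))
shap_values = explainer.shap_values(X_test)  # per-class attributions over the 16 features
```

Part of the mismatch might be that these attributions live in DCT-coefficient space rather than pixel space, so they don't map cleanly onto the regions a radiologist would segment.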

I am open to any suggestions from your side, and any lead would be highly appreciated.

2

nibbels t1_j2817hu wrote

IMO, read work by Been Kim and her collaborators. They study problems in post-hoc XAI methods; a recent-ish paper shows these methods don't always reveal spurious correlations. You would also probably do well to study flaws in the models themselves (underspecification, spurious correlations, etc.).

You can also look into "inherently" interpretable models. These are models that, for whatever reason, lend themselves to their own explanations. Attention models are an example, and Hinton's new Forward-Forward method seems more intrinsically interpretable. Disclaimer: attention weights have their own issues and are not always accurate.
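For example, here is a rough sketch of pulling attention weights out of a Hugging Face transformer as a crude token-importance signal. The model name is just an example, and per the disclaimer above, these weights should not be read as a faithful explanation.

```python
# Sketch: attention weights as a (crude, often unfaithful) token-importance signal.
import torch
from transformers import AutoModel, AutoTokenizer

name = "distilbert-base-uncased"  # example model, not a recommendation
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

inputs = tokenizer("Patient reports fever and sinus pressure", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each (batch, heads, seq_len, seq_len)
last_layer = outputs.attentions[-1][0]           # (heads, seq_len, seq_len)
importance = last_layer.mean(dim=0).mean(dim=0)  # avg attention each token receives

for tok, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), importance):
    print(f"{tok:>12s}  {float(score):.3f}")
```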

If you're thinking of developing your own XAI methods, I should warn you: the field is flooded with "new" methods that are basically just tweaks on current ones. If you do want to work on new methods, I recommend first getting a very good sense of where current methods fail (search Kim's work; if you want more specifics, I can provide links), then testing your methods in a very robust and rigorous way.

For a better understanding of the field, check out this paper: https://www.researchgate.net/publication/358123243_Explainable_Deep_Learning_A_Field_Guide_for_the_Uninitiated

Christoph Molnar also has some really good explanations of the methodologies.

4

trnka t1_j285nir wrote

Nice to see other people doing XAI in healthcare! Here are some of the ideas I would've loved to explore; no clue if they'd work out, but they might be fun:

  • Extract features from dermatology images to be used in interpretable models -- things like whether it's a bullseye, raised, etc. Those features could be used in an interpretable diagnosis model, potentially even a rule-based one made by doctors, and then used to generate part of the clinical note. One of my coworkers did a hackathon project on this and I think it has potential. I have no idea how to design it to be robust against errors in the feature extractor, though, like if it hallucinates a bullseye.
  • I'm curious about using NLG models for explanation. Maybe one of the GPTs on Hugging Face with some prompt work could do it -- something like "We're going to explain the choice of diagnosis in medical cases. Features of patient 1: ... Explanation in favor of sinusitis: ... Features of patient N: ... Explanation in favor of covid:" (rough sketch of the prompt after this list). It wouldn't be a real explanation, but it might be more usable than existing options, especially if there's a lot of textual data from the patient.
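Here's roughly the kind of prompt I have in mind, as a sketch with a Hugging Face text-generation pipeline. The model choice and the case details are placeholders; I haven't validated that this produces anything clinically sensible.

```python
# Sketch of the prompt idea -- the model and the case wording are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "We're going to explain the choice of diagnosis in medical cases.\n"
    "Features of patient 1: fever, facial pain, nasal discharge for 12 days.\n"
    "Explanation in favor of sinusitis: symptoms lasting more than 10 days with "
    "facial pain are typical of bacterial sinusitis.\n"
    "Features of patient 2: fever, dry cough, loss of smell, known exposure.\n"
    "Explanation in favor of covid:"
)

completion = generator(prompt, max_new_tokens=60, do_sample=False)
print(completion[0]["generated_text"])
```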

There was an interesting panel debate about XAI at ML4H this year. I don't think the recordings are online, but the papers are available at least. Mihaela van der Schaar brought up some interesting work too, such as learning interpretable equations for healthcare.

3

Mental-Swordfish7129 t1_j28a9fj wrote

I have an AI model I've been working on for some time now which I believe may be much more interpretable than many recent developments. It utilizes Bayesian model evidence as a core function anyway, so it has already "prepared" evidence and an explanation of sorts for why it "believes" what it "believes". This has made for an interesting development process as I can observe its reasoning evolve. I could elaborate if you're interested.
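To be concrete about what "model evidence" refers to: the Bayesian marginal likelihood of the data under a model. A toy illustration (a conjugate Beta-Bernoulli comparison, nothing like my actual model) of scoring two hypotheses by their log evidence:

```python
# Toy illustration of Bayesian model evidence (marginal likelihood), not the actual model.
from scipy.special import betaln

def log_evidence_beta_bernoulli(heads, tails, alpha=1.0, beta=1.0):
    """log p(data | model) for a Bernoulli likelihood with a Beta(alpha, beta) prior."""
    return betaln(alpha + heads, beta + tails) - betaln(alpha, beta)

data = (7, 3)  # 7 successes, 3 failures

# Model A: uniform prior over the success probability
# Model B: prior concentrated around 0.5 (a "fair coin" hypothesis)
log_ev_a = log_evidence_beta_bernoulli(*data, alpha=1.0, beta=1.0)
log_ev_b = log_evidence_beta_bernoulli(*data, alpha=50.0, beta=50.0)

print(f"log evidence, uniform prior  : {log_ev_a:.3f}")
print(f"log evidence, fair-coin prior: {log_ev_b:.3f}")
# The gap between these is the log Bayes factor -- the kind of quantity that can be
# surfaced as "why the system believes what it believes".
```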

3

[deleted] t1_j27q1ay wrote

[removed]

2

nature_and_carnatic OP t1_j27qb2o wrote

What's odyssey? Is it some kind of technique? I'm hearing it for the first time.

2

[deleted] t1_j27rhjy wrote

[removed]

5

nature_and_carnatic OP t1_j27rtz2 wrote

Username checks out -_-

4

tamannahspeaks t1_j29jwje wrote

How come it isn't downvoted?

1