
trnka t1_j285nir wrote

Nice to see other people doing XAI in healthcare! Here are some of the ideas I would've loved to explore; no clue if they'd work out, but they might be fun:

  • Extract features from dermatology images to be used in interpretable models -- things like whether a lesion shows a bullseye pattern, is raised, etc. Those features could feed an interpretable diagnosis model, potentially even a rule-based one authored by doctors, and could then be used to generate part of the clinical note. One of my coworkers did a hackathon project on it and I think it's got potential. I have no idea how to design it to be robust against errors in the feature extractor, though, like if it hallucinates a bullseye.
  • I'm curious about using NLG models for explanation. Maybe one of the GPTs on Hugging Face with some prompt work could do it -- something like "We're going to explain the choice of diagnosis in medical cases. Features of patient 1: ... Explanation in favor of sinusitis: ... Features of patient N: ... Explanation in favor of covid:" It wouldn't be a real explanation, but it might be more usable than existing options, especially if there's a lot of textual data from the patient.
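To make the first idea concrete, here's a rough sketch of what a doctor-authored rule-based model over extracted image features might look like. The feature names and rules are made up for illustration, not real clinical logic, and the features would come from an upstream image model:

```python
# Hypothetical sketch: a rule-based diagnosis step that consumes features
# extracted from a dermatology image. Feature names and rules are
# placeholders, not real clinical guidance.

def diagnose(features):
    """Map extracted image features to a (diagnosis, rationale) pair."""
    if features.get("bullseye"):
        # The rationale string doubles as text that could seed part of
        # the clinical note.
        return "possible Lyme disease", "bullseye pattern detected"
    if features.get("raised") and features.get("itchy"):
        return "possible hives", "raised, itchy lesion"
    return "unclear", "no rule matched the extracted features"

# Usage: in practice `features` would come from the image feature
# extractor; here it's hard-coded.
print(diagnose({"bullseye": True}))
```

The nice property is that every output comes with the rule that fired, which is exactly the kind of thing doctors could audit -- the open problem is still what happens when the extractor hallucinates a feature.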
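And for the second idea, a minimal sketch of assembling that few-shot prompt. The patient features and explanations here are invented placeholders; in practice you'd send the resulting string to a generative model (e.g. one hosted on Hugging Face) and let it complete the final explanation:

```python
# Hypothetical sketch: building the few-shot explanation prompt described
# above. All patient data and explanation text are placeholders.

def build_prompt(examples, new_features, candidate_dx):
    """examples: list of (features_text, diagnosis, explanation_text)."""
    lines = ["We're going to explain the choice of diagnosis in medical cases."]
    for i, (feats, dx, expl) in enumerate(examples, start=1):
        lines.append(f"Features of patient {i}: {feats}")
        lines.append(f"Explanation in favor of {dx}: {expl}")
    lines.append(f"Features of patient {len(examples) + 1}: {new_features}")
    # Left open-ended: the model completes the explanation after the colon.
    lines.append(f"Explanation in favor of {candidate_dx}:")
    return "\n".join(lines)

prompt = build_prompt(
    [("fever, facial pressure", "sinusitis",
      "facial pressure suggests sinus involvement")],
    "fever, cough, loss of smell",
    "covid",
)
print(prompt)
```

The output is a post-hoc rationalization rather than a faithful explanation of any model's reasoning, which is the caveat above -- but it could still be more readable than saliency maps when there's a lot of patient text to draw on.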

There was an interesting panel debate about XAI at ML4H this year. I don't think the recordings are online, but the papers are available at least. Mihaela van der Schaar brought up some interesting work too, such as learning interpretable equations for healthcare.
