Submitted by nature_and_carnatic t3_zyp602 in MachineLearning
I worked on an explainable AI project for healthcare. Nothing fancy, to be honest, just a few existing XAI models. But I would like to continue research in the field of interpretability. Does anyone have an idea of how to proceed further? If you have any ideas in mind, please feel free to share them so they will be useful to others as well. Thanks!
zyl1024 t1_j27fudl wrote
If you've tried some existing XAI techniques, I bet you were left deeply unsatisfied by most of them. I work on interpretability, and I am convinced that most (maybe all?) methods don't work right now, but I am also convinced that XAI in general will work eventually. So I would recommend starting from the basics: what do you want out of XAI, and in what ways are current methods failing to provide it? Trying to answer those two questions will surface a lot of research questions.
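As a concrete toy illustration of that second question (a hypothetical example, not something from the thread): for a linear model, a plain input-gradient "explanation" is identical for every input, which says nothing about a specific prediction. Gradient×input at least varies with the example being explained. The sketch below assumes nothing beyond NumPy:

```python
import numpy as np

# Toy linear model f(x) = w . x (hypothetical example for illustration).
w = np.array([2.0, -1.0, 0.5])

def input_gradient(w, x):
    # Gradient of w.x w.r.t. x is just w, regardless of x:
    # the "explanation" ignores the input entirely.
    return w

def gradient_times_input(w, x):
    # Elementwise w_i * x_i: varies with the input, and sums to f(x).
    return w * x

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 5.0, -3.0])

print(input_gradient(w, x1))        # [ 2.  -1.   0.5]
print(input_gradient(w, x2))        # [ 2.  -1.   0.5]  (same, input ignored)
print(gradient_times_input(w, x2))  # [-0.  -5.  -1.5]  (input-specific)
```

Even this trivial case shows the kind of failure worth pinning down: a method can look reasonable yet not answer the question you actually care about for a given prediction.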