Submitted by nature_and_carnatic t3_zyp602 in MachineLearning
nature_and_carnatic OP t1_j27hfjn wrote
Reply to comment by zyl1024 in [D] Interpretability research ideas by nature_and_carnatic
Yeah, sadly most of the existing techniques like LIME, SHAP, etc. didn't give good results.
> What do you want out of XAI?

For now I need to explain the model we are using, but that has led nowhere in three months :( . What I was thinking is to either improve an existing XAI technique or build on one. Honestly, that was the only idea that came to mind for research in XAI. But since you are already working in XAI, I am open to your recommendations :). I am an undergrad, so I have some time to work on this before I start a PhD (I mean, I'm thinking of doing a PhD, not decided yet).
> In what ways are they failing?

Honestly, the dataset is ultrasound thyroid images, which an average person can't interpret by eye anyway. We had a radiologist segment the cancerous portion, and the LIME and SHAP attributions aren't matching the regions the radiologist segmented.
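If it helps, one way to make that mismatch concrete (rather than eyeballing heatmaps against masks) is to binarize the attribution map and score its overlap with the radiologist's segmentation. A minimal sketch, assuming the attribution map and mask are NumPy arrays of the same shape (everything here is a placeholder, not your actual pipeline):

```python
import numpy as np

def overlap_scores(attribution, radiologist_mask, top_fraction=0.1):
    """Keep the top `top_fraction` most important pixels and compare to the mask."""
    threshold = np.quantile(attribution.ravel(), 1.0 - top_fraction)
    attr_mask = attribution >= threshold
    gt_mask = radiologist_mask.astype(bool)

    intersection = np.logical_and(attr_mask, gt_mask).sum()
    union = np.logical_or(attr_mask, gt_mask).sum()
    iou = intersection / union if union else 0.0
    dice = 2 * intersection / (attr_mask.sum() + gt_mask.sum())
    return iou, dice

# Synthetic stand-ins for a real attribution map and radiologist mask
attribution = np.random.rand(256, 256)
radiologist_mask = np.zeros((256, 256), dtype=bool)
radiologist_mask[100:150, 80:140] = True
print(overlap_scores(attribution, radiologist_mask))
```

Reporting IoU/Dice like this across the dataset at least gives you a number to argue about instead of "the heatmaps look wrong."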
Our black-box model uses pass-band discrete cosine transform (DCT) feature vectors with a standard SVM classifier on top, and I haven't found many XAI techniques where the black-box model isn't NN-based.
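For what it's worth, LIME and KernelSHAP are model-agnostic, so they only need a predict function and can wrap a DCT + SVM pipeline directly. A rough sketch under that assumption (`extract_dct_features` is a hypothetical stand-in for your pass-band DCT step, and the data is synthetic):

```python
import numpy as np
import shap
from scipy.fft import dctn
from sklearn.svm import SVC

def extract_dct_features(images, n_coeffs=8):
    """Hypothetical DCT feature extractor: keep a low-frequency block of coefficients."""
    feats = []
    for img in images:
        coeffs = dctn(img, norm="ortho")
        feats.append(coeffs[:n_coeffs, :n_coeffs].ravel())
    return np.array(feats)

# Toy data standing in for the ultrasound images and labels
rng = np.random.default_rng(0)
images = rng.random((50, 64, 64))
labels = rng.integers(0, 2, size=50)

X = extract_dct_features(images)
svm = SVC(probability=True).fit(X, labels)

# KernelSHAP treats svm.predict_proba as a black box over the DCT feature vectors
explainer = shap.KernelExplainer(svm.predict_proba, shap.sample(X, 20))
shap_values = explainer.shap_values(X[:2])
```

The catch is that these attributions live in DCT-coefficient space, not pixel space, so they won't line up with the radiologist's segmentation directly; to compare against the mask you'd probably want to explain the full pipeline (feature extraction folded inside the predict function) with an image-level explainer instead.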
I am open to any suggestions from your side, and any lead would be highly appreciated.