TenaciousDwight t1_is3vui6 wrote

LIME has a lot of problems and I think it is worth mentioning more of them. As an example, this paper shows that the top features in a LIME explanation of an outcome are often neither necessary nor sufficient to cause that outcome.

graphicteadatasci t1_is4o6c9 wrote

Well yeah, LIME tells you about an existing model, right? So if multiple features are correlated, the model may drop one of them, and the explanations will then say the dropped feature has no predictive power while the correlated feature it kept is important. But we could drop the "important" feature instead and train an equally good model (maybe even better).
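A toy sketch of that correlated-feature situation (my own construction, not from any paper discussed here), using plain NumPy least squares as a stand-in for "train a model": either of two correlated features supports a near-identical fit, so which one an explanation flags as important is largely arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)   # strongly correlated with x1
y = 3.0 * x1 + rng.normal(scale=0.5, size=n)  # the target really depends on x1

def r2(X, y):
    """Fit ordinary least squares (with intercept) and return training R^2."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

r2_x1 = r2(x1[:, None], y)   # model that kept x1 and dropped x2
r2_x2 = r2(x2[:, None], y)   # model that dropped x1 and uses x2 instead
```

Both models fit almost equally well, so an explainer applied to the second model will report x2 as important and x1 as useless, even though x1 is what actually generates y.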

TenaciousDwight t1_is578zw wrote

I think the paper is saying that LIME may explain a model's prediction using features that are actually of little consequence to the model. I have a feeling this is tied to the instability problem: run LIME twice to explain the same point and you can get two significantly different explanations.
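That instability is easy to reproduce even with a stripped-down LIME-style local surrogate (a hand-rolled sketch, not the actual `lime` library): perturb the point, weight the samples by proximity, and fit a weighted linear model. Different random perturbations give different "explanations" of the same prediction:

```python
import numpy as np

def black_box(X):
    # toy model: class 1 when the two features agree in sign
    return (X[:, 0] * X[:, 1] > 0).astype(float)

def lime_like_explanation(x0, predict, n_samples=200, width=1.0, seed=None):
    """Minimal LIME-style surrogate: sample perturbations around x0,
    weight them by proximity, fit a weighted linear model, and return
    its feature coefficients as the 'explanation'."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=1.0, size=(n_samples, x0.size))
    y = predict(X)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / width**2)
    # weighted least squares with an intercept column
    A = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]  # feature weights, intercept dropped

x0 = np.array([0.5, 0.5])
e1 = lime_like_explanation(x0, black_box, seed=0)
e2 = lime_like_explanation(x0, black_box, seed=1)  # same point, new perturbations
```

`e1` and `e2` explain the exact same prediction but generally disagree numerically; with fewer samples or a more nonlinear model they can even disagree on which feature matters most.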

Visual-Arm-7375 OP t1_is4vqus wrote

But is this LIME's problem? I mean, it is the model that is not taking the correlated feature into account, not LIME. LIME just looks at the original model.