Comments

flinsypop t1_is0cbrw wrote

This is a nice brief introduction. Where you could improve is showing how each part of the presentation maps to code, so people can play around with it. My advice would be to link to the LIME tutorials and fill in any gaps with notebooks of your own. If you can direct your viewers to practice what you explain, and also have safety nets where you explain common problems and solutions, you can differentiate your content from the dozens of other creators explaining the same tools and concepts.

I do have a bias here: I dislike slides and slides of mathematical notation, but you did a good job of breaking it up with visuals in the middle. However, in the second half it would have been better if you had referred back to examples from the first half as you went along. Using different examples can be fine but, in my experience explaining this to colleagues, the lack of continuity can stun-lock people. For example, people might wonder what exactly the "perturbed" dataset would look like for the images at the start. You could show the output of LIME for the husky picture alongside one of the perturbed copies of it that would have been generated for the "perturbed" dataset.
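
To make that concrete, here is a rough sketch of the kind of side-by-side I mean. Everything in it is a stand-in: the sample cat photo replaces the husky picture and `predict_fn` is a dummy classifier, but the superpixel on/off perturbation mirrors what `lime_image` does internally (same default quickshift settings; grey fill used here for simplicity, where LIME defaults to the segment mean).

```python
import numpy as np
from skimage.data import chelsea              # sample photo standing in for the husky
from skimage.segmentation import quickshift
from lime import lime_image

image = chelsea().astype(np.float64) / 255.0  # (H, W, 3) array in [0, 1]

# One sample from the "perturbed" dataset: LIME segments the image into
# superpixels and switches random subsets of them off, rather than adding
# pixel noise.
segments = quickshift(image, kernel_size=4, max_dist=200, ratio=0.2)
keep = np.random.randint(0, 2, segments.max() + 1).astype(bool)
perturbed = image.copy()
perturbed[~keep[segments]] = 0.5              # grey out the "off" superpixels

# Dummy classifier standing in for the real model (assumption):
def predict_fn(images):
    brightness = images.mean(axis=(1, 2, 3))
    return np.stack([brightness, 1.0 - brightness], axis=1)

# The actual LIME explanation you could show next to `perturbed`:
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=2, num_samples=200)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
```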

14

Visual-Arm-7375 OP t1_is0jefe wrote

Thank you very much for your opinion u/flinsypop! Appreciate it a lot!
I completely agree with what you say, and I'll keep that in mind for the next videos.

4

danabxy t1_is1ztl8 wrote

I have used LIME and I thought this explanation was a great start. I agree with flinsypop that an in-depth example would be good. Josh Starmer on StatQuest does a nice job of this, for example when he explains XGBoost.

Also, I think that if you're interested in continuing to make educational YouTube videos, you should work on your accent. It is quite hard for a native speaker to understand, and I expect non-natives would struggle to access your great content. I offer this only to help you, so I'm sorry if it comes across badly.

Keep up the great work!

3

Visual-Arm-7375 OP t1_is2qdv9 wrote

Thank you very much for the comment! I don't take it badly at all; constructive criticism is always welcome. Regarding my accent, I really try to give my best, but it's innate haha, I'm sorry. I will try to improve it, although I don't know how. Also, I recorded with the laptop; I have no microphone :( However, I provided detailed subtitles in the video!!

1

TenaciousDwight t1_is3vui6 wrote

LIME has a lot of problems and I think it is worth mentioning more of them. As an example, this paper shows that the top features in a LIME explanation of an outcome are often neither necessary nor sufficient to cause that outcome.

2

graphicteadatasci t1_is4o6c9 wrote

Well yeah, LIME tells you about an existing model, right? So if multiple features are correlated, the model may effectively drop one of them, and the explanations will then say the dropped feature has no predictive power while its correlated twin is important. But we could drop the "important" feature instead and train an equally good model (maybe even better).
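
Here's a toy sketch of that effect (made-up data and names; the L1 penalty is there to push the model to actually drop one of the twins):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)       # x2 is nearly a copy of x1
y = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([x1, x2])

# L1 regularisation tends to zero out one of two highly correlated features:
model = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)
print("coefficients:", model.coef_)

explainer = LimeTabularExplainer(X, feature_names=["x1", "x2"],
                                 class_names=["neg", "pos"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=2)
print(exp.as_list())   # the dropped twin gets (near) zero weight

# Drop the "important" feature and retrain: the score barely moves,
# because the correlated twin carries the same signal.
model2 = LogisticRegression(penalty="l1", C=0.1,
                            solver="liblinear").fit(X[:, [1]], y)
print(model.score(X, y), model2.score(X[:, [1]], y))
```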

1

TenaciousDwight t1_is578zw wrote

I think the paper is saying that LIME may explain a model's prediction using features that are actually of little consequence to the model. I have a feeling this is tied to the instability problem: run LIME twice to explain the same point and you can get two significantly different explanations.
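
It's easy to reproduce with a toy setup (synthetic data, deliberately small `num_samples` to exaggerate the randomness):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, mode="classification")
for run in range(2):
    # each call draws a fresh random neighbourhood around X[0]
    exp = explainer.explain_instance(X[0], model.predict_proba,
                                     num_features=5, num_samples=100)
    print(f"run {run}:", exp.as_list())
# The two ranked lists often disagree; raising num_samples stabilises them,
# but the sampled neighbourhood is still random on every call.
```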

3

Visual-Arm-7375 OP t1_is4vqus wrote

But is this LIME's problem? I mean, it's the model that isn't taking the correlated feature into account, not LIME. LIME just looks at the original model.

1