The Science Behind InterpretML: SHAP

  51,648 views

Microsoft Developer

Days ago

Comments: 23
@caiyu538 · 1 year ago
It looks like SHAP is a brute-force search over all the features, considering every possible combination. Is my understanding correct?
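For context on the "brute force" reading: exact Shapley values can indeed be computed by enumerating every feature coalition, which is exponential in the number of features (the SHAP library uses model-specific shortcuts and sampling to avoid this). Below is a minimal sketch with a made-up three-feature model and a single baseline row standing in for "missing" features; it illustrates the combinatorics, not the library's actual estimators.

```python
# Exact Shapley values by enumerating all feature coalitions.
# The model f, instance x, and baseline are illustrative assumptions.
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values for one instance x, using a single baseline
    row to stand in for features that are 'absent' from a coalition."""
    n = len(x)
    phi = np.zeros(n)

    def value(subset):
        # Features in `subset` take their values from x; the rest from baseline.
        z = baseline.copy()
        z[list(subset)] = x[list(subset)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                          / math.factorial(n))
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy model: a hand-written nonlinear function of three features.
f = lambda z: z[0] + 2 * z[1] * z[2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.array([0.0, 0.0, 0.0])

phi = shapley_values(f, x, baseline)
print(phi, phi.sum(), f(x) - f(baseline))  # contributions sum to f(x) - f(baseline)
```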
@robertoooooooooo · 2 years ago
Insightful, thank you
@chandrimad5776 · 2 years ago
Like SHAP, there is also LIME for model interpretability. It would be great if you posted a video comparing the two. I have built a credit risk model using a LightGBM regressor, where I used SHAP to present model interpretability. I would also like to use LIME for the same purpose. If you are interested, I will post my Kaggle link to get your feedback on my work.
@sehaconsulting · 2 years ago
Would love to see the work you did comparing SHAP and LIME.
@missfatmissfat · 1 year ago
Hello, I'm also interested 🙂
@muhammadusman7466 · 1 year ago
Does Shapley work well with categorical features?
@juanete69 · 2 years ago
In a linear regression, what is the difference in interpretation between the SHAP value and the partial R^2?
@Lucky10279 · 2 years ago
In linear regression, R^2 is literally just the proportion of the variance of the dependent variable that can be predicted/explained by the model. Or, in other words, if I understand correctly, it's the ratio of the variance of the predictions to the variance of the actual variable we're attempting to model. So if y is what we're trying to predict and the model is y_hat = mx + b, R² = var(y_hat)/var(y), or equivalently 1 - var(y - y_hat)/var(y). It's a measure of how well the model matches the actual relationship between the independent and dependent variables. SHAP values, on the other hand, if I'm understanding the video correctly, don't necessarily say anything in themselves about how well the model fits the actual data overall -- instead they tell us how much each independent variable is affecting the predictions/classifications the model spits out.
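A quick numerical sketch of that contrast, using made-up data and the standard closed form for linear-model SHAP values under a feature-independence assumption (phi_i = b_i * (x_i - mean(x_i))):

```python
# R^2 is one global goodness-of-fit number; SHAP values are per-row,
# per-feature attributions. Data and coefficients here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2.0, size=n)  # noisy linear target

# Ordinary least squares fit with an intercept.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b = coef[0], coef[1:]
y_hat = A @ coef

# R^2: a single number for the whole model.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# SHAP values for a linear model: one number per feature per row.
phi = (X - X.mean(axis=0)) * b  # shape (n, 2)

print("R^2:", round(r2, 3))
print("SHAP values for the first row:", phi[0])
# Attributions plus the base value reconstruct the prediction:
print(b0 + X.mean(axis=0) @ b + phi[0].sum(), "vs", y_hat[0])
```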
@hen-e1v · 3 years ago
Great introduction to SHAP.
@manuelillanes1635 · 3 years ago
Can SHAP values be used to interpret unsupervised models too?
@mananthakral9977 · 3 years ago
No, I don't think we can use it for unsupervised learning, since it requires looking at the output value after each feature is added to the model.
@cassidymentus7450 · 2 years ago
For unsupervised learning there might not be a one-dimensional numeric output (e.g. credit risk). It still might be possible to define a useful one. Take PCA for example: you can define the output as |x - x_mean| (where | | is Euclidean distance, aka the Pythagorean formula). SHAP will then tell you how much each principal component contributes to the distance from the mean... which is essentially the variance along that axis (depending on how you look at it).
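A rough sketch of that PCA-style idea, assuming the shap package is installed; the synthetic data, the distance_from_mean function, and the KernelExplainer settings are all illustrative choices, not a prescribed recipe.

```python
# Define a scalar "output" (distance from the data mean) and let SHAP
# attribute it to the principal-component scores.
import numpy as np
import shap
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.3])  # different variance per axis

pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)  # principal-component scores (centered coordinates)

def distance_from_mean(S):
    # With all components kept, the norm of the scores equals |x - x_mean|.
    return np.linalg.norm(S, axis=1)

background = shap.sample(scores, 50)                     # background sample for the explainer
explainer = shap.KernelExplainer(distance_from_mean, background)
phi = explainer.shap_values(scores[:5], nsamples=200)    # contribution of each component
print(phi)
```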
@SandeepPawar1 · 4 years ago
Scott - should the SHAP values for feature importance be based on training data or test data?
@lipei7704 · 3 years ago
I believe the SHAP values for feature importance should be based on training data, because it will cause data leakage if you choose test data.
@claustrost614 · 3 years ago
@@lipei7704 I agree. It depends on what you want to do with it. If you want to use it for more or less global feature importance/feature selection, I would only use it on the training set. If by "feature importance" you mean the local Shapley values for one example from your test data, you can have a look at whether it would be an outlier or something like that compared to your training set.
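For what it's worth, a minimal sketch of the training-set approach discussed here; the synthetic data and the RandomForest + TreeExplainer combination are chosen purely for illustration.

```python
# "Global" SHAP feature importance as the mean absolute SHAP value per
# feature, computed on the training data.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=6, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_train = explainer.shap_values(X_train)   # attributions on the training rows

importance = np.abs(shap_train).mean(axis=0)  # one importance score per feature
print(importance)
```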
@igoriakubovskii1958 · 3 years ago
How can we make a decision based on SHAP when it's not causal?
@berniethejet · 2 years ago
Seems Scott conflates the term 'linear models' with 'models with no interactions'. Linear regression models can still have cross products of variables, e.g. y = a + b1 x1 + b2 x2 + b3 x1 x2 (see the quick example below this thread).
@sebastienban6074 · 2 years ago
I think he meant that they don't interact the same way trees do.
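On that interaction point, a small sketch (synthetic data, made-up coefficients) showing that an ordinary linear regression can include an x1*x2 cross-product term and still be linear in its coefficients:

```python
# Fit a linear regression on [x1, x2, x1*x2] and recover the interaction coefficient.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]  # includes x1*x2

# Add the cross-product column explicitly, then fit a plain linear model.
X_int = PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False).fit_transform(X)
model = LinearRegression().fit(X_int, y)
print(model.intercept_, model.coef_)  # recovers roughly [2.0, -1.0, 0.5]
```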
@FirstNameLastName-fv4eu · 6 months ago
Since when did XGBoost become an AI model?
@zhanli9121 · 1 year ago
This is one of the most opaque presentations of SHAP, and the presentation style is also bad -- it's rare to see a presenter use formulas when the formulas aren't really needed, but this guy manages to be one of them.
@samirelzein1095 · 1 year ago
After I wrote a similarly sharp comment above, I found yours. Actually, everyone here agrees. A good case study for Microsoft staffing.
@samirelzein1095 · 1 year ago
The dude managed to get many questions, but didn't manage to get many compliments. Neither will I give one. Don't make videos just to be uploaded for Microsoft; make videos when you feel the idea is complete, the intuition is clear, and you're capable of distilling hours of material into 10 minutes of speaking. Wasted my time.