Explainable AI explained! | #5 Counterfactual explanations and adversarial attacks

23,431 views

DeepFindr

1 day ago

Comments: 29
@abhishekbhatia651 3 years ago
You are such an intuitive explainer, and you back it up with mathematical explanations as well. Truly a gem of a teacher! Thank you!
@Rajesh241985 2 years ago
One of the grey areas in AI/ML is well explained. Great work. Thank you so much.
@DeepFindr 2 years ago
Thank you :)
@nintishia 3 years ago
Thanks a lot for this lucid presentation on counterfactual explanations and the DiCE python toolbox. Kudos to the presenter.
@DeepFindr 3 years ago
I'm happy that you liked it!
@ramiscanyakar5078 1 year ago
You are amazing, great intuitive explanations. Thanks a lot for your effort and time.
@gabrielcornejo2206 13 days ago
Excellent presentation. I have a question: can DiCE be used with a model that has 3 classes instead of 2?
@dr.aravindacvnmamit3770 1 month ago
I agree with your lecture and it was very nice. How would you apply this to images like X-rays or CT scans?
@bevansmith3210 3 years ago
Thanks for the great series of videos! Quick question: do you know of any studies where these counterfactuals have actually been tested on real data, to see whether changing those features would indeed change the output? In other words, has anyone generated counterfactuals and then been able to look at real-life data to check that those counterfactuals actually do change the output? Thanks again. Your videos are great!
@DeepFindr 3 years ago
Hi, thanks for your feedback :) Yes, that would be interesting to see. Generally, how valid counterfactual statements are depends on two things:
1. How good the model on which the counterfactuals are calculated is
2. How similar the train/test data is to the real-world data
If the model performs very well and the real-world data follows the same distribution as the data the model was trained on, I would say there is no doubt that real-world evidence can be found for the counterfactuals. Many of the datasets used in publications on counterfactuals are based on real-world data, for instance:
- Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification (Zephyr dataset)
- On Counterfactual Explanations under Predictive Multiplicity (HELOC dataset)
- Model-Agnostic CFs for consequential decisions (look for coverage)
- Counterfactual Explanations for Machine Learning - A review (they also talk about causality here)
Hope this is what you were looking for :)
@bevansmith3210 3 years ago
@DeepFindr thanks for the response. That helps. I will check out those papers. Cheers.
@joshitox2498 2 years ago
Thank you for such a nice, simple explanation.
@m.kaschi2741 2 years ago
Thanks for the videos and the good explanations. I just started my Master's thesis, and the process of reading through all these papers is quite tedious for me; I just can't concentrate on papers as well as on videos. And btw, your English is quite good ;) When I started watching your videos I wasn't even sure that you're German :) Where do you / did you study? :) Subbed
@DeepFindr 2 years ago
Thanks! I studied at the KIT. :)
@m.kaschi2741 2 years ago
@DeepFindr haha so cool, I study there too :)
@DeepFindr 2 years ago
Awesome! I really liked it there. Good luck with your studies!
@suyashpandya3104 2 years ago
I think you have misunderstood the generated CF. It was told to change bmi by 0.9, which means making it 30.9 instead of 30, to change stroke from 0 (no stroke) to 1 (stroke). This would also be a more feasible CF. Please let me know if I am wrong.
@DeepFindr 2 years ago
Hi, no, the value actually is 0.9. That's why I put in "permitted_range" afterwards, to guarantee more feasible values. The CFs returned are always complete new data points, not just the changes :)
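For reference, a minimal sketch of what the permitted_range constraint mentioned above looks like in DiCE, using the sklearn backend. The synthetic dataset and the age/bmi/stroke column names are illustrative assumptions loosely modeled on the stroke example from the video, not the exact code shown in it:

```python
# Hedged sketch: constraining DiCE counterfactuals with permitted_range.
# The dataset below is synthetic; column names are illustrative assumptions.
import dice_ml
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
train_df = pd.DataFrame({
    "age": rng.uniform(20, 80, 500),
    "bmi": rng.uniform(15, 45, 500),
})
train_df["stroke"] = ((train_df["age"] > 60) & (train_df["bmi"] > 30)).astype(int)

clf = RandomForestClassifier(random_state=0).fit(
    train_df[["age", "bmi"]], train_df["stroke"]
)

d = dice_ml.Data(dataframe=train_df,
                 continuous_features=["age", "bmi"],
                 outcome_name="stroke")
m = dice_ml.Model(model=clf, backend="sklearn")
exp = dice_ml.Dice(d, m, method="random")

query = train_df.drop(columns="stroke").iloc[[0]]
cfs = exp.generate_counterfactuals(
    query, total_CFs=3, desired_class="opposite",
    permitted_range={"bmi": [15.0, 45.0]},  # keep bmi inside a plausible range
)
# CFs come back as full data points; show_only_changes highlights the deltas.
cfs.visualize_as_dataframe(show_only_changes=True)
```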
@asiffaisal269 3 years ago
That's a very good explanation. Good stuff. Thank you.
@farisnurhafiz7832 5 months ago
Is it okay to not scale the numerical data? Can we just proceed with the analysis as is?
@EigenA 2 years ago
Great job!
@SUGATORAY 2 years ago
Very nice presentation. Quick question: how do you get the options to run a cell in a .py file? It's not a notebook, right?
@DeepFindr 2 years ago
Hi! It's VS Code cell magic :) You simply put a #%% in the file to create cells. You can find more information here: code.visualstudio.com/docs/python/jupyter-support-py
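A quick illustration of those cell markers; the file name and contents below are hypothetical, the only requirement is the #%% comment lines:

```python
# example_script.py -- a plain .py file (hypothetical name).
# In VS Code, each "#%%" comment starts a new runnable cell with
# "Run Cell" / "Run Below" code-lens buttons rendered above it.

#%%
import numpy as np
data = np.arange(10)

#%% This cell can be run on its own, like a notebook cell
print(data.mean())
```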
@rogeraylagas3798 3 years ago
Thank you for this fantastic series of XAI videos! I was wondering if there is the possibility of adding a minimum probability for the counterfactual decision class. So let's say we have a person with an 80% stroke probability, and we want a counterfactual that is not merely classified as no_stroke (which could be 51%, leaving a lot of uncertainty) but has a 90% probability of no_stroke. Is that possible?
@DeepFindr 3 years ago
Hi! Thanks, I'm happy that you liked it! When it comes to counterfactuals you can be very creative, so yes, that is possible. However, I don't think any of the libraries support this out of the box during CF generation. Some time ago I built a simple genetic algorithm that creates counterfactuals (similar to the one in the CertifAI paper); there I could include all the constraints I wanted to add. I also used the predicted probabilities as confidence scores for the generated counterfactuals. In my experiments I also realized that sometimes no such counterfactual can be found and the maximum probability is, for instance, 60%. However, you can always ask your model how certain it is about a counterfactual. That means you could generate a couple of CFs and then simply discard the ones that fall below your threshold. Best regards!
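A minimal sketch of the "generate CFs, then keep only the confident ones" idea described in this reply. It assumes an already-fitted scikit-learn-style classifier clf with predict_proba and a DataFrame cf_candidates of generated counterfactuals (e.g. collected from several DiCE runs); all names are illustrative:

```python
import pandas as pd

def filter_confident_cfs(clf, cf_candidates: pd.DataFrame,
                         target_class: int, min_proba: float = 0.9) -> pd.DataFrame:
    """Keep only counterfactuals whose predicted probability for the
    desired class is at least min_proba."""
    proba = clf.predict_proba(cf_candidates)[:, target_class]
    keep = proba >= min_proba
    confident = cf_candidates[keep].copy()
    confident["cf_confidence"] = proba[keep]  # model certainty per CF
    return confident

# Illustrative usage: demand >= 90% confidence for the no_stroke class (0).
# confident_cfs = filter_confident_cfs(clf, cf_candidates, target_class=0)
```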
@armagaan009 2 years ago
Brilliant!!
@Sn-nw6zb 2 years ago
Amazing, thanks for your clear explanations. Are there any tools for calculating counterfactuals for neural networks?
@DeepFindr 2 years ago
Hi! Yes, you can try out the CEML or DiCE Python libraries :)
@DeepFindr 2 years ago
github.com/interpretml/DiCE
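For neural networks specifically, DiCE ships a PyTorch backend with a gradient-based search. A minimal sketch follows; the toy data, untrained model, and feature names are assumptions for illustration, and details may need adjusting to your DiCE version:

```python
# Hedged sketch: DiCE with a PyTorch model via its "PYT" backend.
import dice_ml
import numpy as np
import pandas as pd
import torch.nn as nn

rng = np.random.default_rng(0)
train_df = pd.DataFrame({
    "age": rng.uniform(20, 80, 200),
    "bmi": rng.uniform(15, 45, 200),
})
train_df["stroke"] = ((train_df["age"] + train_df["bmi"]) > 95).astype(int)

# A small binary classifier emitting a single probability.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
# ... training loop omitted for brevity ...

d = dice_ml.Data(dataframe=train_df,
                 continuous_features=["age", "bmi"],
                 outcome_name="stroke")
m = dice_ml.Model(model=model, backend="PYT")    # "PYT" = PyTorch backend
exp = dice_ml.Dice(d, m, method="gradient")       # gradient-based CF search

query = train_df.drop(columns="stroke").iloc[[0]]
cfs = exp.generate_counterfactuals(query, total_CFs=2, desired_class="opposite")
cfs.visualize_as_dataframe()
```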