You are such an intuitive explainer, backed by mathematical explanations as well. Truly a gem of a teacher! Thank you!
@Rajesh241985 2 years ago
One of the grey areas in AI/ML is well explained. Great work. Thank you so much.
@DeepFindr 2 years ago
Thank you :)
@nintishia 3 years ago
Thanks a lot for this lucid presentation on counterfactual explanations and the DiCE python toolbox. Kudos to the presenter.
@DeepFindr 3 years ago
I'm happy that you liked it!
@ramiscanyakar5078 a year ago
You are amazing, great intuitive explanations. Thanks a lot for your effort and time!
@gabrielcornejo2206 13 days ago
Excellent presentation. I have a question: can DiCE be used with a model that has 3 classes instead of 2?
@dr.aravindacvnmamit3770 a month ago
I agree with your lecture, it was very nice. How can this be applied to images like X-rays or CT scans?
@bevansmith3210 3 years ago
Thanks for the great series of videos! Quick question, do you know of any studies where they have actually tested these counterfactuals on real data to see whether changing those features would indeed change the output? In other words, have they generated the counterfactuals and then actually been able to look at real life data to see if those counterfactuals actually do change the output. Thanks again. Your videos are great!
@DeepFindr 3 years ago
Hi, thanks for your feedback :)
Yes, that would be interesting to see. Generally, how valid all the counterfactual statements are depends on two things:
1. How good the model is on which the counterfactuals are calculated
2. How similar the train/test data is to the real-world data
If the model performs very well and the distribution of the real-world data is the same as that of the data the model was trained on, I would say there is no doubt that real-world evidence can be found for the counterfactuals.
Many of the datasets used in publications on counterfactuals are based on real-world data, for instance:
- Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification (Zephyr dataset)
- On Counterfactual Explanations under Predictive Multiplicity (HELOC dataset)
- Model-Agnostic CFs for consequential decisions (look for coverage)
- Counterfactual Explanations for Machine Learning - A review (they also talk about causality here)
Hope that this is what you are looking for :)
@bevansmith3210 3 years ago
@@DeepFindr thanks for the response. That helps. I will check out those papers. Cheers.
@joshitox2498 2 years ago
Thank you for such a nice, simple explanation.
@m.kaschi2741 2 years ago
Thanks for the videos and good explanations. I just started my Master's thesis, and the process of reading through all these papers is quite tedious for me; I just can't concentrate on papers as well as on videos. And btw, your English is quite good ;) When I started watching your videos I wasn't even sure that you're German :) Where do you / did you study? :) Subbed
@DeepFindr 2 years ago
Thanks! I studied at the KIT. :)
@m.kaschi2741 2 years ago
@@DeepFindr haha so cool, I study there too :)
@DeepFindr 2 years ago
Awesome! I really liked it there. Good luck with your studies!
@suyashpandya3104 2 years ago
I think you have misunderstood the CF generated. It was told to change BMI by 0.9, which means making it 30.9 instead of 30, to change stroke from 0 (no stroke) to 1 (stroke). This would also be a more feasible CF. Please let me know if I am wrong.
@DeepFindr 2 years ago
Hi, no, the value actually is 0.9. That's why I added a "permitted range" afterwards, to guarantee more feasible values. The CFs returned are always complete new data points, not just the changes :)
@asiffaisal269 3 years ago
That's a very good explanation. Good stuff. Thank you.
@farisnurhafiz7832 5 months ago
Is it okay to not scale the numerical data? Can we just proceed with the analysis as is?
@EigenA 2 years ago
Great job!
@SUGATORAY 2 years ago
Very nice presentation. Quick question: how do you get the options to run a cell in a .py file? It's not a notebook, right?
@DeepFindr 2 years ago
Hi! It's VS Code Cell magic :) You simply put a #%% in the file to create cells. Here you can find more information: code.visualstudio.com/docs/python/jupyter-support-py
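For readers unfamiliar with this feature, a plain .py file becomes cell-runnable as soon as it contains those marker comments; a minimal sketch (the cell contents here are just placeholders):

```python
# %% Cell 1: everything down to the next marker runs as one block
data = [1, 2, 3, 4]

# %% Cell 2: VS Code shows "Run Cell" / "Run Below" links above each marker
total = sum(data)
print(total)  # prints 10
```

VS Code recognizes both `#%%` and `# %%` as cell delimiters, and the file still runs top to bottom as an ordinary script.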
@rogeraylagas3798 3 years ago
Thank you for this fantastic series of XAI videos! I was wondering if there is the possibility of adding a minimum probability for the counterfactual decision class. So let's say we have a person with an 80% stroke probability and we want a counterfactual for this person not only to be detected as no_stroke (which could be just 51%, so there is a lot of uncertainty there) but with a 90% probability of no_stroke. Is that possible?
@DeepFindr 3 years ago
Hi! Thanks, I'm happy that you liked it!
When it comes to counterfactuals you can be very creative, so yes, that is possible. However, I don't think this is possible out of the box with any of the libraries during the CF generation.
Some time ago I built a simple genetic algorithm that creates counterfactuals (similar to the one in the CertifAI paper) - there I could include all the constraints I wanted to add. I also used the probabilities as confidence scores for the generated counterfactuals. In my experiments I also realized that sometimes no counterfactuals can be found and the max probability is, for instance, 60%.
However, you can always ask your model how certain it is about the counterfactual. That means you could generate a couple of CFs and then simply discard the ones that fall below your threshold.
Best regards!
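The generate-then-filter idea from that last paragraph can be sketched without any CF library. This is a toy illustration, not the video's DiCE setup: a random search stands in for the genetic algorithm, the data and the `confident_counterfactuals` helper are made up, and candidates are kept only if the model assigns the target class at least the requested probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary data: the class is determined by the sign of the feature sum
X = rng.normal(size=(500, 3))
y = (X.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def confident_counterfactuals(x, model, threshold=0.9, n_samples=2000, scale=1.5):
    """Randomly perturb x and keep only candidates that reach at least
    `threshold` predicted probability for the opposite class."""
    original_class = model.predict(x.reshape(1, -1))[0]
    target_class = 1 - original_class
    candidates = x + rng.normal(scale=scale, size=(n_samples, x.size))
    proba = model.predict_proba(candidates)[:, target_class]
    return candidates[proba >= threshold]

x = np.array([-1.0, -0.5, -0.5])       # predicted as class 0
cfs = confident_counterfactuals(x, model)
print(len(cfs))                        # number of high-confidence CFs found
```

Candidates that only barely cross the decision boundary (e.g. 51%) are discarded by the same filter, which is exactly the thresholding described above.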
@armagaan009 2 years ago
Brilliant!!
@Sn-nw6zb 2 years ago
Amazing, thanks for your clear explanations. Are there any tools for calculating counterfactuals for neural networks?
@DeepFindr 2 years ago
Hi! Yes, you can try out the CEML or DiCE Python libraries :)
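For a differentiable model, the underlying idea those libraries build on can be sketched with a Wachter-style gradient search: minimize a distance term plus a prediction term directly over the input. The tiny logistic unit below stands in for a neural network, and all numbers are illustrative; with an autograd framework the same loop applies to a real network.

```python
import numpy as np

# A tiny differentiable "model": f(x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, target=1.0, lam=10.0, lr=0.05, steps=500):
    """Minimize  lam * (f(x') - target)^2 + ||x' - x||^2
    by gradient descent on the input x'."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # gradient of the prediction loss (chain rule through the sigmoid)
        grad_pred = 2 * lam * (p - target) * p * (1 - p) * w
        # gradient of the distance penalty, keeping x' close to x
        grad_dist = 2 * (x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([-1.0, 1.0])        # f(x) is well below 0.5 -> class 0
x_cf = counterfactual(x)
print(sigmoid(w @ x_cf + b))     # pushed above 0.5 -> class 1
```

The trade-off parameter `lam` plays the same role as the proximity weights exposed by the libraries: larger values favor flipping the prediction over staying close to the original point.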