Explainable AI Cheat Sheet - Five Key Categories

45,158 views

Jay Alammar

1 day ago

Comments: 38
@prasadjayanti 3 years ago
I am a data scientist and really appreciate your work! Keep the good work going!
@mrunalinigarud1162 3 years ago
Best blog and guidance video for AI.
@kartavyabhatt7818 3 years ago
Thank you very much for the video. 9:57 - yes, a dedicated video for each of the methods would be really great!
@its_me7363 3 years ago
Now I think it would be great if Jay could make a video on SHAP explainability and usage... hope you have time to accept this request.
@NishantKumar-mp9zg 3 years ago
+1 I'll also be looking forward to it.
@arp_ai 3 years ago
I'd certainly love to learn more about it at some point.
@its_me7363 3 years ago
@arp_ai Will wait for your video on this topic.
@kokoko5690 1 year ago
Thank you for your video; it's really well organized and easy to understand.
@metallica42425 3 years ago
Really appreciate these resources! Thanks for always explaining things so clearly!
@jeanpauldelamarre6583 3 years ago
Explainable AI is not only about neural networks. Everyone wants to make neural networks explainable, which they are not by design. You also have to consider other types of models, like rule-based models (expert systems) or even probabilistic models, which are explainable by design.
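A minimal sketch of the "explainable by design" point above: in a rule-based system, the prediction *is* the rule that fired, so the explanation comes for free. (The rules and field names here are made up for illustration, not from any real expert system.)

```python
# Toy rule-based classifier: the explanation is simply which rule matched.
# Rules are checked in order; the last rule is an unconditional default.
RULES = [
    (lambda p: p["temp"] > 38.0 and p["cough"], "flu"),
    (lambda p: p["temp"] > 38.0, "fever, cause unknown"),
    (lambda p: True, "healthy"),
]

def classify(patient):
    """Return (label, explanation) for a patient record."""
    for idx, (condition, label) in enumerate(RULES):
        if condition(patient):
            return label, f"rule #{idx} fired"

label, why = classify({"temp": 39.0, "cough": True})
# label == "flu", why == "rule #0 fired"
```

Contrast this with a neural network, where the mapping from inputs to output has no such human-readable trace and explanations must be reconstructed after the fact.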
@sheldonsebastian7232 3 years ago
Found this channel via a LinkedIn post. It was a good find!
@incase3007 1 year ago
Great video, highly informative!
@Ninaadiaries 11 months ago
Thanks for the information. It was helpful for me :)
@TusharKale9 3 years ago
Very important topic covered in good detail. Thank you.
@palomoshoeshoe8985 6 months ago
Thank you so much for your contribution, I really appreciate it.
@camoha8313 1 year ago
Thanks for this video (love the John Coltrane).
@francistembo650 3 years ago
Thanks man!
@mpalaourg8597 2 years ago
Nice video! But even better are the resources that were referenced! Thank you...
@dev0nul162 2 years ago
Thank you for what you have provided here! The links add tremendous value to your videos.
@omyeues 2 years ago
Very interesting! Thank you for sharing.
@chathurijayaweera1590 3 years ago
Very informative and easily understandable. Thank you for making this video.
@TheSiddhartha2u 3 years ago
Thank you for the nice and easy-to-follow information. I was looking for exactly this 👍
@muhammadomar9552 2 years ago
Thanks for sharing the knowledge. Where do decision trees lie in the cheat sheet?
@ottunrasheed4076 3 years ago
Interesting content. I am looking forward to the paper-reading videos.
@raminbakhtiyari5429 3 years ago
Just fascinating.
@MeriJ-ze5dd 3 years ago
Thanks Jay. Amazing video. I have a question though: why is the pretraining in GPT-3 called unsupervised learning? It works on labeled data, so I think it should be a supervised learning task.
@arp_ai 3 years ago
It's better called self-supervised learning nowadays. It's unsupervised in the same way that word2vec is unsupervised -- it is not trained on an explicitly labeled dataset, but rather on examples generated from free text.
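A minimal sketch of what "examples generated from free text" means in the word2vec case: the (center, context) training pairs are derived mechanically from a sliding window over raw text, with no human labeling. (Illustrative code, not from the video.)

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs from a token list,
    word2vec skip-gram style: every word within `window` positions of
    a center word becomes one of its context labels."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = "explainable ai helps humans trust models".split()
pairs = skipgram_pairs(tokens, window=1)
# e.g. ("ai", "explainable") and ("ai", "helps") are both training pairs
```

GPT-style pretraining is self-supervised in the same spirit: the "label" for each position is simply the next token of the text itself.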
@sanjanasuresh5565 2 years ago
Hello! Can you please make a video on the interpretability of unsupervised ML models?
@balapranav5364 3 years ago
Hi sir, does SHAP give you the same kind of info as feature importance results?
@arp_ai 3 years ago
SHAP is a method of obtaining feature importance, yes.
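To make the connection concrete, here is a tiny, self-contained sketch of the Shapley-value computation that SHAP approximates: each feature's importance is its average marginal contribution over all coalitions of the other features. (This brute-force version is for intuition only; the real `shap` library uses much faster approximations.)

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model with few features.
    Features outside a coalition are set to their baseline values."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Weight of this coalition in the Shapley formula.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += w * (model(with_i) - model(without_i))
        phis.append(phi)
    return phis

# Toy linear model: Shapley values recover each feature's contribution.
model = lambda v: 2 * v[0] + 3 * v[1]
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi == [2.0, 3.0]; note they sum to model(x) - model(baseline)
```

The "efficiency" property visible at the end -- the values sum exactly to the gap between the prediction and the baseline prediction -- is what makes SHAP values a principled form of feature importance.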
@deepbayes6808 2 years ago
Why are logistic or linear regression considered interpretable? If you have thousands of features, how can you interpret a non-sparse weight matrix?
@juanpablopajaro9229 2 years ago
Jay, I was exploring SHAP for explainable deep learning, and it didn't work. The GitHub repo mentioned an update in TensorFlow that conflicts with SHAP. What do you know about that?
@yastradamus 3 years ago
That John Coltrane cover in the back though!
@moustafa_shomer 3 years ago
The example-based part was kind of shallow; you didn't talk about how they figure out the specific flaws in the model.
@arp_ai 3 years ago
That tends to be a different problem, which can arise from the model but potentially also from the dataset. The problem then becomes model debugging. XAI is one debugging tool, but there are many others, especially deep examinations of the data.
@abhilashsanap1207 3 years ago
Some day you should do a video about the background in your videos. Please.
@RahilQodus 1 year ago
Thanks a lot. It was a great introduction and really helped me🙏🫀