You are the best. I wish you health and happiness, dear Dr. Karimi.🙏
@amirhkarimi · 6 days ago
I’m glad you enjoyed the pod 🙏🏼
@getiyeabule6260 · 8 months ago
Can you share the slides (PPT) with me?
@amirrezamohammadi · 1 year ago
Nice and coherent talk; I especially enjoyed the idea of linking concepts and program synthesis to XAI. That could well be the future of XAI considering the LLM breakthroughs. Thanks, Amir-Hossein Jan :)
@amirhosseinahmadi6759 · 1 year ago
Too much for a first year data science student at Laurier to understand! Shine on Amir 💫
@amirhkarimi · 1 year ago
Thanks Amirhossein🙏🏼🙏🏼 same to you, keep rocking 💪🏼
@LongTran-tr8sx · 2 years ago
great video, thanks for your contribution!
@DistortedV12 · 3 years ago
volume too low
@amirhkarimi · 1 year ago
🫠
@bingyangwen958 · 4 years ago
Great talk! Can I ask a couple of questions? Do you think the method of counterfactual explanations implicitly assumes the causal relation X -> Y, where X are the input variables and Y is the model's output?
@amirhkarimi · 4 years ago
Indeed, from a temporal perspective we may consider that the X's precede Y and are therefore a cause (at least insofar as the model h is involved). However, we don't rely on this relationship when deriving the structural counterfactual; we only use h() to solve the optimization problem. Does this help answer your question? A minimal sketch of that optimization is below.
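For concreteness, here is a minimal sketch of the generic gradient-based counterfactual search in the style of Wachter et al. (arxiv.org/abs/1711.00399), not the structural approach from the talk. The logistic model weights, the factual point, and the hyperparameters below are made up for illustration; the point is simply that only the classifier h() is queried while searching for a nearby input whose prediction flips.

```python
# Minimal, illustrative counterfactual search (Wachter-style objective):
# minimize  lam * (h(x_cf) - target)^2 + ||x_cf - x||_1
# Only h() (a fixed, pre-trained model) is used; no causal knowledge is assumed.
import numpy as np

# Hypothetical pre-trained logistic model h(x) = sigmoid(w.x + b)
w = np.array([1.5, -2.0])
b = 0.3

def h(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=1.0, lam=10.0, lr=0.05, steps=2000):
    """Gradient descent on the counterfactual objective, starting from the factual x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = h(x_cf)
        # gradient of the squared prediction-loss term through the sigmoid
        grad_pred = 2 * lam * (p - target) * p * (1 - p) * w
        # (sub)gradient of the L1 distance to the factual point
        grad_dist = np.sign(x_cf - x)
        x_cf -= lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([-1.0, 1.0])            # factual input, predicted negative by h
x_cf = counterfactual(x)
print("factual prediction:       ", h(x))
print("counterfactual point:     ", x_cf)
print("counterfactual prediction:", h(x_cf))
```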
@bingyangwen958 · 4 years ago
@amirhkarimi Yes, thanks for your explanation!
@علیرضادربندی-ث8ب · 4 years ago
Well done!
@Cipherweave · 4 years ago
It's sooo beautiful. I'm glad to have a brother like you :)
@Cipherweave · 4 years ago
oooooooooffffffffffffffffffffffffffffffff
@florianro.9185 · 5 years ago
Very nice... Are there any other papers you can suggest for counterfactual explanations? I already looked at CertifAI, MACE, FACE, DiCE, Foil Trees and CLEAR. Thanks
@amirhkarimi · 5 years ago
Those are all great options! Perhaps you would find these interesting as well:
- arxiv.org/abs/1706.06691
- arxiv.org/abs/1711.00399
- arxiv.org/abs/1712.08443
- arxiv.org/abs/1809.06514
- arxiv.org/abs/1907.09615
- arxiv.org/abs/1910.00057