MIT 6.S191: AI Bias and Fairness

49,502 views

Alexander Amini

1 day ago

Comments: 24
@anshusingh3403 3 years ago
At a time when AI systems are grappling with biases that can impact real lives, this topic is so important. It was very well delivered. Thanks :)
@AshokTak 3 years ago
I love how the AI community is learning about this problem and about solutions for debiasing models, especially popular models in computer vision and NLP!
@nintishia 3 years ago
This is not just a balanced, state-of-the-art overview of the area; the depth the speaker has gained from researching it clearly shows. Thanks particularly for the part on algorithmic solutions. I am curious whether the learned-latent-structure work has been developed further, and also whether training the variational layer in the autoencoder conflicts with the resampling approach in some way.
@lukeSkywalkwer 3 years ago
Thanks so much for putting this online! I was wondering how the underlying distribution (the frequency of values z can take) can be estimated from the latent variables z (around 35:51). I mean, it's not as trivial as the distribution of z being identical to the distribution z takes in the training data, right?
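One way the question above is often handled in latent-space debiasing work is a cheap approximation rather than a full density model: histogram the encoder outputs independently per latent dimension, then weight each example inversely to its estimated density so rare latent regions are sampled more often. A minimal sketch under stated assumptions — the random z stands in for real encoder outputs, and the function name, bin count, and smoothing constant `alpha` are my choices, not the lecture's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for encoder outputs: pretend latent codes for 1000 training
# images with 2 latent dimensions (in a real model these come from the VAE).
z = rng.normal(size=(1000, 2))

def resampling_weights(z, n_bins=10, alpha=0.01):
    """Approximate the latent density with independent per-dimension
    histograms, then weight each sample inversely to that density."""
    weights = np.ones(len(z))
    for d in range(z.shape[1]):
        hist, edges = np.histogram(z[:, d], bins=n_bins, density=True)
        # Find which bin each sample falls in (clip to a valid bin index).
        idx = np.clip(np.digitize(z[:, d], edges) - 1, 0, n_bins - 1)
        # alpha smooths near-empty bins so no weight blows up.
        weights *= 1.0 / (hist[idx] + alpha)
    return weights / weights.sum()

w = resampling_weights(z)
# Use w as sampling probabilities when drawing training batches.
batch_idx = rng.choice(len(z), size=32, p=w)
```

So the estimate is not "z's distribution equals the training distribution" as a modeling assumption; it is an empirical histogram of the training z's, used only to upweight underrepresented regions.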
@bitsbard 1 year ago
For those keen on this subject, you won't regret diving into "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell. It was a delight to read.
@luckychitundu1070 2 years ago
Great video!
@chanochbaranes6002 3 years ago
Another amazing video. If I want to continue with deep learning, what should I learn next, and where?
@AbhishekSinghSambyal 6 months ago
Awesome lecture. How do you create such presentations? Which app?
@harshkumaragarwal8326 3 years ago
I loved the cancer detection example. Thanks for the lecture :))
@busello 2 years ago
Great contribution. Clear. Useful. Thank you!
@macknightxu2199 3 years ago
Any courses on privacy preservation when using deep learning?
@lotfullahandishmand753 3 years ago
Thanks for your contribution and for the great work of keeping people up to date with the latest in deep learning. Could we have a format with more practical and challenging problems that the AI community can work through, beyond these labs? It was just a proposal. Thanks again, keep going Ava and Amini!
@macknightxu2199 3 years ago
Awesome courses. Where can I find lab projects like these to try out AI and deep learning, matching this series of MIT deep learning courses?
@kruan2661 3 years ago
Great video! 8:06 I don't think the COCO graph is accurate; there is a lot of training and application of AI in China, with their own databases. Most of the time the Chinese just do this kind of research secretly.
@TheWayofFairness 3 years ago
All of our problems begin with unfairness
@christianngnie3188 3 years ago
Awesome
@bhavyakaneriya8916 3 years ago
👍👍👍
@mehdidolati 3 years ago
Who disliked the video before it even began, and why?!
@Amilakasun1 3 years ago
These ethics are far-left liberal nonsense filled with hypocrisy. They are totally fine with AI vehicles killing men and boys to save women, but throw a fit if it hires men over women in an already male-dominated field.
@jonaskoelker 3 years ago
I noticed something curious: from 25:02 to about 25:30, a real-world distribution of hair color is shown next to a "gold standard" sample distribution. The lecturer mentions that black hair is underrepresented in the sample. She does not mention that red hair is also underrepresented, even though that is evidently true, if the diagram is anything to go by. I'm not sure what to make of this, but it stood out to me like a sore thumb.
@DM_Dmserk 3 years ago
@@jonaskoelker My understanding is that the lecturer's message is to communicate that the dataset has bias, rather than to enumerate every problem. But yes, underrepresented red hair is also a problem.
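The point in the thread above can be checked mechanically rather than by eye: compare each category's sample frequency against its real-world frequency and flag every shortfall, not just the most visible one. A minimal sketch — the frequencies below are illustrative assumptions, not the lecture's data:

```python
# Hypothetical hair-color frequencies (illustrative only, not the lecture's data).
real_world = {"blonde": 0.30, "brown": 0.40, "black": 0.20, "red": 0.10}
sample     = {"blonde": 0.45, "brown": 0.40, "black": 0.10, "red": 0.05}

# Flag every category the sample under-represents relative to the real world.
underrepresented = [c for c in real_world if sample[c] < real_world[c]]
print(underrepresented)  # → ['black', 'red']
```

With such a check, both black and red hair would be flagged, matching the observation that the diagram shows more than one underrepresented class.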
@terraflops 3 years ago
@Alexander Amini
1. The watermelon example was excellent.
2. As a transgender person, CNNs are adversarial to my gender, as the models are based *only* on *cisgender* people (a need for more disaggregated evaluation).
3. I don't like CNNs, and don't practice making them, as all the examples and datasets are boring to me and simply binary. Talking about gender bias is also biased, because transgender humans exist and gender-neutral terms exist, but you would never know it in any tech/coding lecture. I am sure MIT has transgender people in their school.