At a time when AI systems are grappling with biases that can impact real lives, this topic is so important. It was very well delivered. Thanks :)
@sourjuicy A year ago
@AshokTak 3 years ago
I love how the AI community is learning about this problem and about solutions for debiasing models, especially popular models in computer vision and NLP!
@nintishia 3 years ago
This is not just a balanced, state-of-the-art overview of the area; the depth that comes from the speaker's own research in the area clearly shows. Thanks particularly for the algorithmic solutions part. I am curious whether the learned latent structure part has been developed further, and also whether training the variational layer in the autoencoder conflicts with the resampling approach in some way.
@lukeSkywalkwer 3 years ago
Thanks so much for putting this online! I was wondering how the underlying distribution (the frequency of values z can take) can be estimated from the latent variables z (around 35:51). I mean, it's not as trivial as the distribution of z being identical to the distribution z takes in the training data, right?
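@@lukeSkywalkwer Right, it isn't identical to the empirical distribution in any trivial sense; it has to be estimated. One common approach, used in the debiasing VAE work the lecture covers, is to approximate each latent dimension's distribution with a histogram over the training set's encodings, then combine the per-dimension density estimates to score how common each sample's latent code is. A minimal NumPy sketch, with toy random data standing in for the encoder outputs z (the variable names, bin count, and smoothing term here are illustrative choices, not the lecture's code):

```python
import numpy as np

# Toy stand-in for encoder outputs z: 1000 samples, 2 latent dimensions.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 2))

# Approximate the marginal distribution of each latent dimension with a
# histogram, then score each sample by the product of its per-dimension
# density estimates (this treats the dimensions as independent).
n_bins = 10
densities = np.ones(len(z))
for d in range(z.shape[1]):
    hist, edges = np.histogram(z[:, d], bins=n_bins, density=True)
    # Map each sample to its bin; interior edges give indices 0..n_bins-1.
    bin_idx = np.digitize(z[:, d], edges[1:-1])
    densities *= hist[bin_idx]

# Resampling weights: samples in rare latent regions (under-represented
# in training data) get a higher probability of being drawn.
alpha = 0.01  # smoothing term to avoid division by zero
weights = 1.0 / (densities + alpha)
weights /= weights.sum()
```

The weights can then be passed to a sampler during training so that under-represented latent regions are seen more often.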
@bitsbard A year ago
For those keen on this subject, you won't regret diving into "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell. It was a delight to read.
@luckychitundu1070 2 years ago
Great Video
@chanochbaranes6002 3 years ago
Another amazing video. If I wish to continue with deep learning, what should I learn, and where?
@AbhishekSinghSambyal 6 months ago
Awesome lecture. How do you create such presentations? Which app?
@harshkumaragarwal8326 3 years ago
I loved the cancer detection example. Thanks for the lecture :))
@busello 2 years ago
Great contribution. Clear. Useful. Thank you!
@macknightxu2199 3 years ago
Any courses on privacy preservation when using deep learning?
@lotfullahandishmand753 3 years ago
Thanks for your contribution and for the great work of giving people the latest information and knowledge about deep learning. Could we have some format with more practical and challenging problems that the AI community can work through, apart from these labs? It was just a proposal. Thanks again, KEEP GOING Ava and Amini
@macknightxu2199 3 years ago
Awesome courses. And where can I find something like these lab projects, matching this series of MIT Deep Learning courses, to try out AI and deep learning?
@kruan2661 3 years ago
Great video! 8:06 I don't think the COCO graph is accurate; there is a lot of training and application of AI in China, with their own databases. Most of the time the Chinese just do this kind of research privately.
@TheWayofFairness 3 years ago
All of our problems begin with unfairness
@christianngnie3188 3 years ago
Awesome
@bhavyakaneriya8916 3 years ago
👍👍👍
@mehdidolati 3 years ago
Who disliked the video before it begins and why?!
@Amilakasun1 3 years ago
These ethics are far-left liberal nonsense filled with hypocrisy. They are totally fine with AI vehicles killing men and boys to save women, but throw a fit if it hires men over women in an already male-dominated field.
@jonaskoelker 3 years ago
I noticed something curious: at 25:02 to about 25:30, you see a real-world distribution of hair color next to a "gold standard" sample distribution. The lecturer mentions that black hair is underrepresented in the sample. She does not mention that red hair is underrepresented, even though that is also (and evidently) true, if the diagram is anything to go by. I'm not sure what to make of this, but it stood out to me like a sore thumb.
@DM_Dmserk 3 years ago
@@jonaskoelker my understanding is the lecturer’s message is to communicate that the dataset has bias, instead of trying to enumerate the problems. But yes, under-represented red hair is a problem
@terraflops 3 years ago
@Alexander Amini
1. The watermelon example was excellent.
2. As a transgender person, CNNs are adversarial to my gender, as the models are based *only* on *cisgender* people (a need for more disaggregated evaluation).
3. I don't like CNNs and don't practice making them, as all the examples and datasets are boring to me and simply binary. Talking about gender bias is also biased, because transgender humans exist and gender-neutral terms exist, but you would never know it from any tech/coding lecture. I am sure MIT has transgender people in their school.