MIT 6.S191: Uncertainty in Deep Learning

32,790 views

Alexander Amini

1 day ago

Comments: 14
@ajit60w 2 years ago
OOD means x was not even in the training set. P_test(y|x) ≠ P_train(y|x) may also mean wrong classification, or an open-set case, i.e. a class not seen during training (a feature vector not within the bounds of the vectors in the training set).
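A minimal sketch of the "outside the training bounds" reading of OOD mentioned above, using per-feature min/max of the training set; the data and threshold here are purely illustrative, not from the lecture:

import numpy as np

def fit_feature_bounds(X_train):
    # Record the per-feature min and max seen during training.
    return X_train.min(axis=0), X_train.max(axis=0)

def is_ood(x, bounds, tol=0.0):
    # Flag a test point whose feature vector falls outside the training bounds.
    lo, hi = bounds
    return bool(np.any(x < lo - tol) or np.any(x > hi + tol))

X_train = np.random.randn(1000, 8)   # stand-in for training features
bounds = fit_feature_bounds(X_train)
x_test = 5 * np.random.randn(8)      # a point likely far outside the training range
print(is_ood(x_test, bounds))

Distribution shift in the sense P_test(y|x) ≠ P_train(y|x) is a broader condition than this simple bounds check, which only looks at the inputs.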
@vimukthirandika872 2 years ago
Thank you MIT
@zigzag4273 1 year ago
Hey Alex. Hope you're well. Is the 2023 course going to be free too? If yes, when does it go live?
@AAmini 1 year ago
Thanks! We are actually announcing the premiere today! The first release will be March 10 and a new lecture will be released every Friday at 10am ET.
@TheEightSixEight 2 years ago
Please post the slides as indicated in the video description. Thank you.
@gulsenaaltntas5398 2 years ago
You can find the slides from the NeurIPS tutorial here: docs.google.com/presentation/d/1savivnNqKtYgPzxrqQU8w_sObx1t0Ahq76gZFNTo960
@QuantAI-kp8xt 5 months ago
Very well done. Thank you.
@vyacheslavli9254 1 year ago
In the deep ensemble method, which particular classifier does the uncertainty correspond to? Is the assumption that the resulting uncertainty corresponds to the architecture with near-optimal hyperparameters? It rationally should, but overall it sounds very hand-wavy. On top of that, it is the uncertainty of a classifier evaluated on the training domain. How does it change on an OOD dataset?
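In a deep ensemble the uncertainty is not attached to any single member; it is read off the disagreement between members trained from different random initializations (and possibly different hyperparameters). A hedged sketch, assuming a hypothetical list models of already-trained classifiers that each expose predict_proba:

import numpy as np

def ensemble_uncertainty(models, X):
    # Stack member predictions: shape (n_members, n_samples, n_classes).
    probs = np.stack([m.predict_proba(X) for m in models])
    mean_prob = probs.mean(axis=0)                   # ensemble prediction
    disagreement = probs.var(axis=0).mean(axis=-1)   # spread across members
    # Total predictive uncertainty: entropy of the averaged prediction.
    entropy = -(mean_prob * np.log(mean_prob + 1e-12)).sum(axis=-1)
    return mean_prob, disagreement, entropy

Empirically, members trained from different seeds tend to disagree more on OOD inputs than on the training domain, which is why the ensemble's disagreement is commonly used as an OOD signal, though nothing guarantees it.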
@thatapuguy2768 1 year ago
Can someone answer my basic question? The speaker defines confidence as the predicted probability of correctness. I am guessing this is NOT the same as yprob, the predicted probability of the positive class that a trained model returns for every test instance. So how does one estimate the confidence?
@anvarkurmukov2438 1 year ago
If you are referring to 13:30, then for binary classification confidence is exactly what you are saying, p(y=1|x).
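For a binary model that returns yprob = p(y=1|x), the confidence of the predicted label is max(yprob, 1 - yprob); if the model is well calibrated, this is also its probability of being correct, which connects the two notions in this thread. An illustrative sketch (the numbers are made up):

import numpy as np

yprob = np.array([0.93, 0.08, 0.55])       # model's p(y=1|x) for three test points
pred = (yprob >= 0.5).astype(int)          # predicted labels
confidence = np.maximum(yprob, 1 - yprob)  # predicted probability of the predicted label
print(pred, confidence)                    # [1 0 1] [0.93 0.92 0.55]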
@SunilKalmady 1 year ago
@anvarkurmukov2438 Thanks for answering. I guess it is a bit about terminology. Under this notion of confidence, overfitted models will confidently make wrong predictions. I was referring to the uncertainty in those predictions, i.e. confidence bounds on the predicted scores. I have since figured out how to estimate those by bootstrapping.
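One way to get such confidence bounds by bootstrapping, sketched here with a scikit-learn logistic regression standing in for the actual model (the model choice and parameters are placeholders, not what the commenter used):

import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_score_bounds(X_train, y_train, X_test, n_boot=200, alpha=0.05, seed=0):
    # Refit the model on resampled training sets and collect predicted scores.
    rng = np.random.default_rng(seed)
    n = len(X_train)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                  # sample with replacement
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train[idx], y_train[idx])
        scores.append(model.predict_proba(X_test)[:, 1])  # p(y=1|x) per refit
    scores = np.stack(scores)                             # (n_boot, n_test)
    lower = np.percentile(scores, 100 * alpha / 2, axis=0)
    upper = np.percentile(scores, 100 * (1 - alpha / 2), axis=0)
    return lower, upper                                   # per-point (1 - alpha) interval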
@nikteshy9131 2 years ago
Thank you )) MIT ))))))
@drxplorer778 11 months ago
This tutorial saved my ass
@user-wr4yl7tx3w 2 months ago
He is really sloppy in his explanation, not really trying to be clear.