MIT Introduction to Deep Learning (2022) | 6.S191

621,178 views

Alexander Amini

1 day ago

Comments: 233
@ashishkumarchoubey2819 2 years ago
Super excited to learn! Thank you, MIT folks, for open-sourcing your lectures for less fortunate folks like us to learn and grow.
@deeplypresent 1 year ago
I studied ML years ago and watched most of the available MOOC content out there. Doing a refresher. This guy is the best teacher I’ve come across!
@vladflore 2 years ago
Just watched the lecture and I'm amazed at how "easy" it seems to be, which says a lot about the knowledge and teaching technique of the Professor. It sounds all doable even for people who have no contact with ML and DL, like myself. Well done and a big thank you for making this available worldwide!
@usrehman5046 2 years ago
What are the prerequisites for this? Kindly reply.
@ThriveUp1 2 years ago
@@usrehman5046 I'm not sure what will be covered in this course, but it wouldn't hurt to get familiar with the mathematics required for AI.
@usrehman5046 2 years ago
@@ThriveUp1 Thanks a lot for replying. Will look into it.
@fckj4794 2 years ago
Gg
@sarthakmohanty6691 2 years ago
Alexander is a master at presenting super complex things in a simple way; making such lectures public helps a lot. I personally have benefited a lot.
@scrap8660 2 years ago
I'm citing you in my high school project! Thank you for making these lectures public, I literally can't thank you enough.
@adityachakole755 2 years ago
Never thought that someday I would be able to learn from a lecture happening at MIT, but here you are. Thank you so much.
@christiantutivengalvez9203 2 years ago
I also teach deep learning and when I see classes like this it teaches me how easy it can be to explain complex things like deep learning. Thanks!!
@abhinavmishra9323 2 years ago
Can you explain something? At 42:38, the algorithm shown says to pick a single data point i, and this step is inside the loop. If we can reach the minimum using the gradient, why is this step inside the loop? Don't we just need one random point?
@howardroth7524 2 years ago
@@abhinavmishra9323 Great question, as it may not have been clear in the video. My understanding of stochastic gradient descent is that you randomly pick one data point at each iteration. Each iteration uses a random value pulled from the complete data set. Basically, it's not the same 'i' each time, unless the randomness is broken and keeps choosing the same 'i' over and over, or you hit an event whose probability is vanishingly close to 0.
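The point above can be sketched in code. A minimal NumPy illustration of stochastic gradient descent on a least-squares fit, where a fresh random index i is drawn inside the loop at every iteration (function and variable names are illustrative, not from the lecture):

```python
import numpy as np

def sgd_least_squares(x, y, lr=0.05, steps=500, seed=0):
    """Fit y ~ w*x by SGD: each step uses ONE randomly chosen data point."""
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(steps):
        i = rng.integers(len(x))             # fresh random index every iteration
        grad = 2 * (w * x[i] - y[i]) * x[i]  # d/dw of (w*x_i - y_i)^2
        w -= lr * grad
    return w

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x                 # data generated with true slope 3
w = sgd_least_squares(x, y)  # converges close to 3
```

If the index were drawn once outside the loop instead, the fit would only ever see one data point, which is exactly the confusion raised in the question.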
@SHIVAMKUMAR-fw9nf 2 years ago
1. Dot product, 2. Add bias, 3. Apply a non-linearity.
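Those three steps are a single perceptron's forward pass. A minimal NumPy sketch (the weights, bias, and inputs are made-up numbers, with sigmoid chosen as the non-linearity):

```python
import numpy as np

def perceptron(x, w, b):
    z = np.dot(w, x)             # 1. dot product of weights and inputs
    z = z + b                    # 2. add bias
    return 1 / (1 + np.exp(-z))  # 3. apply a non-linearity (sigmoid)

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
b = 0.5
y = perceptron(x, w, b)  # z = 0.5 - 1.0 + 0.5 = 0, and sigmoid(0) = 0.5
```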
@VALedu11 2 years ago
Year number 3 of following 6.S191... and I am still eagerly awaiting these lecture series. More power to you and Dr. Ava.
@blas.duarte 1 year ago
It may be the first time that I understand the methods and definitions so easily. Great presentation.
@ajith.giove069 2 years ago
I have to say, this is one of the best classes. I have a subject called Deep Learning at my uni with very good information as well, just like this lecture. Thanks for the recap.
@ajaytaneja111 2 years ago
What else could one ask as a weekend treat?!
@imvikrant17 2 years ago
It's really amazing that we have access to such high-quality content for free. Thank you, and I will be looking forward to the upcoming lectures.
@hitesh2313 2 years ago
Arey Kissan bhai, how's it going?
@bennerliu6477 1 year ago
This is one of the best crash courses on deep learning I've ever seen, thanks for the good stuff! Please keep sharing!
@nemeziz_prime 2 years ago
This is unbelievable 🔥 how good could a course ever be!!
@random-ye9hb 2 years ago
The mathematical explanation is very clear. I've already learned these concepts, but this introduction gave me a deeper understanding of them.
@venkatswaraj3054 2 years ago
Finally understood how neural networks work and some basic concepts ooof!!! Thank you.
@arnavraina2615 2 years ago
waited for this and now watching this on the night before my undergraduate practicals :)
@ChandraBhatt101 2 years ago
An excellent introduction to deep learning. Crisp and clear. Thank You!
@emmajennnings5920 2 years ago
I appreciate the way you draw the neural network model; it's not cluttered with lines like some people's drawings. But in the course you haven't explained the topic of hyperparameters. Thanks again!
@mishrr 2 years ago
Thank you, Professor! It's really great to watch lectures from class while working from home.
@jakubkahoun8383 2 years ago
This is actually one of the better videos to understand this.
@WolfAtlas 2 years ago
Working with deep learning on my master's thesis even though I have no background in computer science 😅. This was a fantastic introduction, thanks Alexander!!
@TommyEVO3D 2 years ago
OMFG! I had been waiting for a while and was thinking I should just take last year's session, but now this is coming!
@ashioyajotham 2 years ago
So excited to be part of this cohort. I'm new to deep learning and looking for fellow enthusiasts, so if anyone wishes to collaborate feel free to hit me up.
@ashioyajotham 2 years ago
@@mohammadolaimat1063 Sure
@tsaatse 2 years ago
Super excited to be here, and a great opportunity to learn more through open sourcing.
@ShaidaMuhammad 2 years ago
Good work, Alexander. Keep it up. I'll be watching your whole series this year.
@mrdbourke 2 years ago
Wooohooo!!! Let's go for another year!
@翼龍 2 years ago
Thanks for your high-quality teaching of deep learning! It really helps a lot in understanding it!
@TheVineetpandey 2 years ago
Hi Alexander, you just explained deep learning in a very easy and intuitive way.
@Nachiket_upadhye_23 1 year ago
This is amazing man. Thank you for the lectures. You have no idea how informative these lectures are for me.
@nishchalparne3436 2 years ago
That was the best introductory lecture on neural networks!!! Thanks for open-sourcing the lectures!!!!
@philippmaluta978 2 years ago
Wow! That was a rollercoaster for the mind. Best show ever!
@ptetips-o9027 2 years ago
This was really really helpful. Thank you MIT team. Keep up your good work.
@nomaniqbal1467 2 years ago
Thank you so much, MIT. Just got the mail and here I am; I have been waiting for this.
@thedumbkid 2 years ago
Thanks for making such an awesome series of lectures available for free. Really loving this course and DEEP LEARNING.
@pursuitofcat 1 year ago
Just 5 min of this video > the whole engineering course I had in college.
@mrityunjaypathak8792 2 years ago
Many thanks for breaking down complex subject matter into easily graspable blocks!!!
@jaybhati8013 2 years ago
Hey, thanks Alexander for this; totally worth every minute of it.
@douglastheartist2765 2 years ago
A-MA-ZING. Looking forward to the rest of the class. Thank you! :)
@ayushmishra7214 2 years ago
Thank you, Alex! I had been waiting for this since the New Year.
@SphereofTime 7 months ago
23:00 Dense Layer
@BraxtonMeyer 2 years ago
Good morning, nerds. Pursuing my degree as a computer scientist with interests lying in this sort of thing, this will be epic.
@NLPEngineer 2 years ago
Set the reminder on. I'm waiting! 🙂
@ArmanKhalatyan 2 years ago
Excellent! Me: excited, forwarded to students. Students: excited 🍻👍
@PrinceYadav-xz2mb 2 years ago
Thanks, finally the wait is over 😊
@techsavvy9258 2 years ago
Special thanks from South Korea 🎉
@lil_ToT-XFZ1 2 years ago
Very comprehensive and concise, thank you very much, excellent explanation.
@leroywalton4348 2 years ago
Thank you so much for putting these online.
@Qurat4k 2 years ago
Most awaited video of the year.
@eduardomatheusfigueira 2 years ago
Thank you guys for open-sourcing this treasure!!!
@kimmi9697 1 year ago
thank you for posting this lecture series!
@dkadayinthailand3082 2 years ago
Wow. Just awesome. Glad to learn from the nerds at MIT.
@ashirwadcda 2 years ago
Thank you for this engaging lecture presentation. It helps me a lot.
@princewillinyang5993 2 years ago
Anticipating!!!!!!!😊😊
@paulmurff3133 2 years ago
Thanks for open-sourcing the course, @MIT, @Alexander Amini, and @Ava Soleimany.
@BharathKumarThota-eg8jc 1 year ago
I feel these lectures teach complex problems in an understandable way, needing only basic knowledge of programming.
@kollias-liapisspyridon3727 2 years ago
19:55. At this point it would help if you explained that you've chosen the first activation function, which is monotonic with g(0) = 0.5.
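For readers pausing at that timestamp: the properties the commenter mentions match the sigmoid, which is monotonically increasing and satisfies g(0) = 0.5. They are easy to check numerically with a small sketch (not from the lecture itself):

```python
import math

def g(z):
    """Sigmoid activation: monotonic, g(0) = 0.5, values in (0, 1)."""
    return 1 / (1 + math.exp(-z))

mid_value = g(0.0)                       # exactly 0.5
increasing = g(-1.0) < g(0.0) < g(1.0)   # monotonic on these sample points
```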
@john-franklinanusiem3304 2 years ago
Thank you, thank you, I've been waiting.
@researcher7410 2 years ago
I'm super excited to learn deep learning models...
@naughtynecromancer9006 2 years ago
Super excited to learn.. Luv from INDIA
@sneakyturtle6143 2 years ago
Loved the last course, really excited, and thank you :D
@nguyenvandien8996 2 years ago
Awesome, I just set the reminder.😋
@carlobaroni990 2 years ago
Fantastic lecture 1 intro! Thank you very much!!
@k-alphatech3442 2 years ago
Amazing! Thanks from Brazil!
@leoyu9606 2 years ago
Hi Alexander! I've been a fan of you and Ava since this series in 2020! Looking forward to the new updates this season. Just wondering, would AlphaFold get a snippet to be introduced in detail?
@fehmidakhan8471 2 years ago
I'm really excited to learn. THANK YOU!
@SphereofTime 7 months ago
25:45
@joeoh8989 2 years ago
So W = {W_0, W_1, W_2, ..., W_n}; these elements are themselves the weights for each respective layer? Is it right to think of W as a set containing other sets, where W_i in W is the list of weights for the ith layer? And W* is the combination of weights across the elements of W such that we have the lowest loss? Am I understanding that right? Great presentation. Thank you so much for this; even though it can be challenging, it helps to see the mathematics behind it so the viewer can go look it up later.
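That mental model is roughly right. In code, W is usually a list with one entry per layer, where each entry is a weight matrix rather than a flat set; a hedged sketch of a two-layer forward pass (shapes and values here are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
# W is a list with one weight matrix per layer:
# W[i] maps the input of layer i to its output.
W = [rng.standard_normal((4, 3)),   # layer 1: 3 inputs -> 4 hidden units
     rng.standard_normal((1, 4))]   # layer 2: 4 hidden units -> 1 output

def forward(x, W):
    for W_i in W[:-1]:
        x = np.maximum(0, W_i @ x)  # hidden layers: linear map + ReLU
    return W[-1] @ x                # last layer

x = np.ones(3)
y = forward(x, W)  # a single scalar prediction, shape (1,)
```

Training then searches over every entry of every matrix in W jointly for the combination W* that minimizes the loss.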
@E_rich 2 years ago
This is terrifying given the times, tbh (war propaganda, etc.); I hope we have a thorough way to distinguish between AI mimicry and reality...
@hasijahimanshu 2 years ago
Awesome content; thanks for creating it and keeping it free, unlike others :)
@yashrathi6862 2 years ago
Thanks for providing these great lectures! Are the assignments also available?
@SantoshJha1979 2 years ago
Hi, I am unable to download the slides; it says the GitHub page is non-operational. Could someone help, please?
@ashioyajotham 2 years ago
Me too btw
@duoduo9058 2 years ago
I was just wondering yesterday when this would be out!
@abolfazlmohammadiseif284 1 year ago
Thanks a lot, it's a really great course. Cheers to you!
@aman_singh 2 years ago
The start of the course is amazing.
@saurabhchopra 2 years ago
46:02 Do we randomly pick 50% of the neurons and set their activations to zero, OR do we set each neuron's activation to zero with a probability of 50%? I am a bit confused. Say we have 10 neurons; after 50% dropout, will we use exactly 5 neurons, or can it be any number of neurons?
@armin1048 2 years ago
According to the official TensorFlow documentation on keras.layers.Dropout, 50% of the input units are dropped randomly, NOT "every input has a 50% chance of being dropped", which could lead to all of them, none of them, or any number in between being dropped.
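For what it's worth, common implementations (including `tf.keras.layers.Dropout`) sample an independent Bernoulli mask per unit, so with rate 0.5 the *expected* fraction dropped is 50% but the realized count can vary from step to step. A NumPy sketch of that behavior, using inverted dropout where survivors are scaled by 1/(1-rate) so the expected activation is unchanged:

```python
import numpy as np

def dropout(a, rate, rng):
    """Inverted dropout: each unit is zeroed independently with prob `rate`;
    survivors are scaled by 1/(1-rate) so the expected output matches the input."""
    mask = rng.random(a.shape) >= rate  # independent Bernoulli keep-mask
    return a * mask / (1.0 - rate)

rng = np.random.default_rng(0)
a = np.ones(10)
out = dropout(a, 0.5, rng)
kept = int(np.count_nonzero(out))  # need not be exactly 5
```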
@paragon9671 2 years ago
I am super glad to follow this class, thank you. I can't access the slides; I got a "404 File not found". Please kindly look into the link. Thank you.
@AAmini 2 years ago
Thanks for letting me know. I'm working on fixing that ASAP and getting the slides published.
@howardroth7524 2 years ago
Good lecture. Enjoyed it immensely.
@rahbar2002 2 years ago
Nice lecture and relatable; cool teaching skills.
@RichardTasgal 2 years ago
The effect of the cascaded layers is to create more complex nonlinear functions than the simple activation functions used at each layer, yes? I wonder if there is a motivation for that sort of nonlinear function other than the fact that real-life biological neurons work approximately as a threshold function of a sum of inputs? Not that I have an objection, but why not improve on nature (if there is a way to do so)?
@bcinerd 2 years ago
Can't see the lecture 1 slides; the GitHub page is not available. Please help!
@avoidprogress6002 2 years ago
All the non-linear activation functions (I think that's what they were called) are non-negative; in two cases their images are even contained in a compact interval. But what if I want my network to output a real number, unbounded, possibly negative?
@AAmini 2 years ago
Great question. The solution is to simply remove the activation function from the last layer only (but keep it on all other layers). This way your output can be unbounded.
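A minimal sketch of that design: non-linearities on the hidden layer, but the last layer left linear so the output can be any real number, including negative values (all names and numbers below are illustrative):

```python
import numpy as np

def mlp_regressor(x, W1, b1, W2, b2):
    h = np.maximum(0, W1 @ x + b1)  # hidden layer: ReLU non-linearity
    return W2 @ h + b2              # output layer: NO activation -> unbounded

# Hand-picked weights that produce a negative output for a positive input:
W1 = np.array([[1.0], [-1.0]]); b1 = np.zeros(2)
W2 = np.array([[-2.0, -2.0]]);  b2 = np.array([-1.0])
y = mlp_regressor(np.array([3.0]), W1, b1, W2, b2)  # y = [-7.0]
```

Had a sigmoid been applied to the output layer as well, y would have been squashed into (0, 1) and could never be negative.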
@jayshan3645 2 years ago
It would be great if you provided an explanation of the notation used in the equations. (What do 'i' and 'j' represent?)
@williamgomez6226 2 years ago
Are you for real MIT????? I LOVE U
@uwakmfonutuk4939 2 years ago
I think transformers have become foundational to the field and should probably be added to the curriculum of this course. Maybe in the future?
@AAmini 2 years ago
They will be a prominent part of the new lecture 2 (released next Friday), which is about time series modeling! Definitely check that one out once it premieres if you're interested!
@parvanehranjbar6113 2 years ago
Can you add Persian subtitles to your videos? It would be very helpful for the Iranians and Persian speakers who watch you. Thank you.
@ghadeerelsalhawy 1 year ago
Thank you so much for the amazing lecture.
@osama82405 2 years ago
Can't see the lecture 1 slides on the website.
@youssefyounes4640 2 years ago
I didn't understand the importance of the bias.
@AAmini 2 years ago
If all inputs (x) are zero and we want our output (y) to be non-zero, then the bias is the only way to accomplish that; otherwise y = Wx will always equal zero as well.
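A tiny numeric illustration of that point: with all-zero inputs, no choice of weights can move the output away from zero, but a bias can (the numbers below are made up):

```python
import numpy as np

w = np.array([0.7, -0.3])
x = np.zeros(2)            # all inputs are zero

without_bias = w @ x       # always 0, no matter what w is
with_bias = w @ x + 1.5    # the bias shifts the output away from zero
```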
@youssefyounes4640 2 years ago
@@AAmini Thanks
@davidniquot6423 2 years ago
I find that this lecture needs to be tightened; it could easily be compressed into a quarter of the time. Some simple ideas are repeated 2 or 3 times... there is no need for that. For example, about the learning rate... just apply some dichotomy here and you'll have a kind of adaptive learning rate with no chance of getting stuck.
@054siddarth3 2 years ago
Thank you for the course :D
@marioandresheviacavieres1923 2 years ago
Thank you very much!
@learning6210 2 years ago
thank you for the series :-)
@shashihnt 2 years ago
Can one think of dropout as training a model with a smaller number of parameters (more parameters lead to more complexity, which in turn leads to overfitting), where the dropped parameters change at each training step?
@RichardTasgal 2 years ago
Why not use Newton-Raphson? OK, you need second derivatives too, and you need to invert a matrix. But the illustrations show tens or hundreds of gradient descent steps, which probably adds up to more than the four data points you need to get first and second derivatives in each direction (and the generalization to the Hessian in more dimensions), which you could then use to jump close to the minimum at each step. I know that they are just illustrations, but still... It's not like you are leaving the minimization/optimization to black-box functions that TensorFlow users are best advised not to get into. I'm writing this without doing a decent calculation of the computational cost. I hope I'm clear enough and not going down a line of thought that has already been proven unhelpful.
@nkamganstanis184 2 years ago
Hello sir, the lecture slides and lab materials are not yet available.
@helloansuman 2 years ago
Can we have the lab video?
@caiomar 2 years ago
Let's go baby! (:
@DanielSerratto 2 years ago
I'm curious about the meaning of the course code S191 at MIT?