Super excited to learn! Thank you, MIT folks, for open-sourcing your lectures for less fortunate folks like me to learn and grow.
@deeplypresent 1 year ago
I studied ML years ago and watched most of the available MOOC content out there. Doing a refresher. This guy is the best teacher I’ve come across!
@vladflore 2 years ago
Just watched the lecture and I'm amazed at how "easy" it seems to be, which says a lot about the knowledge and teaching technique of the Professor. It sounds all doable even for people who have no contact with ML and DL, like myself. Well done and a big thank you for making this available worldwide!
@usrehman5046 2 years ago
What are the prerequisites for this? Kindly reply.
@ThriveUp1 2 years ago
@@usrehman5046 I'm not sure what will be covered in this course, but it wouldn't hurt to get familiar with the mathematics required for AI.
@usrehman5046 2 years ago
@@ThriveUp1 Thanks a lot for replying. Will look into it.
@fckj4794 2 years ago
Gg
@sarthakmohanty6691 2 years ago
Alexander is a master at presenting super complex things in a simple way; making such lectures public helps a lot. I personally have benefited a lot.
@scrap8660 2 years ago
I'm citing you in my high school project! Thank you for making these lectures public. I literally can't thank you enough!
@adityachakole755 2 years ago
Never thought that someday I would be able to learn from a lecture happening at MIT but here you are. Thank you so much
@christiantutivengalvez9203 2 years ago
I also teach deep learning, and seeing classes like this shows me how easy it can be to explain complex things like deep learning. Thanks!!
@abhinavmishra9323 2 years ago
Can you explain something? At 42:38, the algorithm shown says "pick a single data point i", and this step is inside the loop. If we can reach the minimum using the gradient, why is this step inside the loop? Don't we just need one random point?
@howardroth7524 2 years ago
@@abhinavmishra9323 Great question, as it may not have been clear in the video. My understanding of stochastic gradient descent is that you randomly pick one data point at each iteration; each iteration uses a random sample pulled from the complete data set. Basically, it's not the same 'i' each time, unless the randomness is broken or you hit the vanishingly small probability of randomly drawing the same 'i' over and over.
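In code, a minimal NumPy sketch of that loop (all names and values here are illustrative, not taken from the lecture):

```python
import numpy as np

# Toy data: y = 3x + noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + rng.normal(scale=0.1, size=100)

w, lr = 0.0, 0.1
for step in range(1000):
    i = rng.integers(len(x))             # a NEW random point each iteration
    grad = 2 * (w * x[i] - y[i]) * x[i]  # gradient of (w*x_i - y_i)^2 w.r.t. w
    w -= lr * grad                       # noisy step toward the minimum

print(w)  # converges to roughly 3.0
```

Each pass through the loop redraws `i`, so the noisy per-sample gradients average out to the true gradient over many steps.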
@SHIVAMKUMAR-fw9nf 2 years ago
1. Dot product 2. Add bias 3. Apply a non-linearity (see the sketch below)
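A minimal sketch of those three steps for a single perceptron, assuming NumPy and a sigmoid non-linearity (names are illustrative):

```python
import numpy as np

def perceptron(x, w, b):
    z = np.dot(x, w)             # 1. dot product of inputs and weights
    z = z + b                    # 2. add the bias
    return 1 / (1 + np.exp(-z))  # 3. apply a non-linearity (sigmoid here)

# Example: 3 inputs with arbitrary weights and bias
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
print(perceptron(x, w, b=0.3))
```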
@VALedu11 2 years ago
Year number 3 of following 6.S191... and I am still eagerly awaiting these lecture series. More power to you and Dr. Ava.
@blas.duarte 1 year ago
It may be the first time that I understand the methods and definitions so easily. Great presentation.
@ajith.giove069 2 years ago
I have to say, this is one of the best classes. I have a course called Deep Learning at my uni with very good information as well, just like this lecture. Thanks for the recap!
@ajaytaneja111 2 years ago
What else could one ask as a weekend treat?!
@imvikrant17 2 years ago
It's really amazing that we have access to such high-quality content for free. Thank you, and I will be looking forward to the upcoming lectures.
@hitesh2313 2 years ago
Heyyy, farmer brother, how's it going?
@bennerliu6477 1 year ago
This is one of the best crash courses on deep learning I've ever seen; thanks for the good stuff! Please keep sharing!
@nemeziz_prime 2 years ago
This is unbelievable 🔥 How good can a course possibly be!!
@random-ye9hb 2 years ago
The mathematical explanation is very clear. I've already learned these concepts, but this introduction gave me a much deeper understanding of them.
@venkatswaraj3054 2 years ago
Finally understood how neural networks work and some basic concepts, oof!!! Thank you.
@arnavraina2615 2 years ago
Waited for this, and now I'm watching it on the night before my undergraduate practicals :)
@ChandraBhatt101 2 years ago
An excellent introduction to deep learning. Crisp and clear. Thank You!
@emmajennnings5920 2 years ago
I appreciate the way you draw the neural network model; it's not cluttered with lines like some people's drawings. But the course hasn't covered the topic of hyperparameters. Thanks again!
@mishrr 2 years ago
Thank you, Professor! It's really great to watch these class lectures while working from home.
@jakubkahoun8383 2 years ago
This is actually one of the better videos for understanding this.
@WolfAtlas 2 years ago
Working with deep learning on my master's thesis even though I have no background in computer science 😅. This was a fantastic introduction, thanks Alexander!!
@TommyEVO3D 2 years ago
OMFG! I had been waiting for a while and was thinking I should just take last year's session, but now this is coming!
@ashioyajotham 2 years ago
So excited to be part of this cohort. I'm new to deep learning and looking for fellow enthusiasts; if anyone wishes to collaborate, feel free to hit me up.
@ashioyajotham 2 years ago
@@mohammadolaimat1063 Sure
@tsaatse 2 years ago
Super excited to be here; what a great opportunity to learn more through open sourcing.
@ShaidaMuhammad 2 years ago
Good work, Alexander. Keep it up. I'll be watching your whole series this year.
@mrdbourke 2 years ago
Wooohooo!!! Let's go for another year!
@翼龍 2 years ago
Thanks for your high-quality teaching of deep learning! It really helps a lot in understanding it!
@TheVineetpandey 2 years ago
Hi Alexander, you just explained deep learning in a very easy and intuitive way.
@Nachiket_upadhye_23 1 year ago
This is amazing, man. Thank you for the lectures. You have no idea how informative these lectures are for me.
@nishchalparne3436 2 years ago
That was the best introductory lecture on neural networks!!! Thanks for open-sourcing the lectures!!!!
@philippmaluta978 2 years ago
Wow! That was a rollercoaster for the mind. Best show ever!
@ptetips-o9027 2 years ago
This was really, really helpful. Thank you, MIT team. Keep up the good work.
@nomaniqbal1467 2 years ago
Thank you so much, MIT. I just got the email and, bam, here I am. I have been waiting for this!
@thedumbkid 2 years ago
Thanks for making such an awesome series of lectures available for free. Really loving this course and DEEP LEARNING!
@pursuitofcat 1 year ago
Just 5 min of this video > whole engineering course I had in college.
@mrityunjaypathak8792 2 years ago
Many thanks for breaking down complex subject matter into easily graspable blocks!!
@jaybhati8013 2 years ago
Hey, thanks Alexander for this; totally worth every minute I watched.
@douglastheartist2765 2 years ago
A-MA-ZING. Looking forward to the rest of the class. Thank you! :)
@ayushmishra7214 2 years ago
Thank you, Alex! I had been waiting for this since the New Year.
@SphereofTime 7 months ago
23:00 Dense Layer
@BraxtonMeyer 2 years ago
Good morning, nerds! Pursuing my degree in computer science, with my interests lying in this sort of thing. This is going to be epic.
@NLPEngineer 2 years ago
Set the reminder on. I'm waiting! 🙂
@ArmanKhalatyan 2 years ago
Excellent! Me: excited, forwarded it to my students. Students: excited 🍻👍
@PrinceYadav-xz2mb 2 years ago
Thanks, the wait is finally over 😊
@techsavvy9258 2 years ago
Special thanks from South Korea 🎉
@lil_ToT-XFZ1 2 years ago
Very comprehensive and concise, thank you very much. Excellent explanation!
@leroywalton4348 2 years ago
Thank you so much for putting these online.
@Qurat4k 2 years ago
The most awaited video of the year!
@eduardomatheusfigueira 2 years ago
Thank you, guys, for open-sourcing this treasure!!!
@kimmi9697 1 year ago
Thank you for posting this lecture series!
@dkadayinthailand3082 2 years ago
Wow. Just awesome. Glad to learn from the nerds at MIT.
@ashirwadcda 2 years ago
Thank you for this engaging lecture presentation. It helps me a lot.
@princewillinyang5993 2 years ago
Anticipating!!!!!!!😊😊
@paulmurff3133 2 years ago
Thanks for open sourcing the course @MIT and @Alexander Amini and @Ava Soleimany
@BharathKumarThota-eg8jc 1 year ago
I feel these lectures teach complex topics in an understandable way, needing only basic knowledge of programming.
@kollias-liapisspyridon3727 2 years ago
19:55 At this point it would help to explain that you've chosen the first activation function shown, which is monotonic and has g(0) = 0.5.
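For reference, assuming the function in question is the sigmoid, a quick check of those two properties:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(0.0))                      # 0.5, i.e. g(0) = 0.5
z = np.linspace(-5, 5, 11)
print(np.all(np.diff(sigmoid(z)) > 0))   # True: strictly increasing (monotonic)
```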
@john-franklinanusiem3304 2 years ago
Thank you, thank you! I've been waiting.
@researcher7410 2 years ago
I'm super excited to learn deep learning models...
@naughtynecromancer9006 2 years ago
Super excited to learn. Love from India!
@sneakyturtle6143 2 years ago
Loved the last course; really excited, and thank you :D
@nguyenvandien8996 2 years ago
Awesome, I just set the reminder. 😋
@carlobaroni990 2 years ago
A fantastic intro in just one lecture! Thank you very much!!
@k-alphatech3442 2 years ago
Amazing! Thanks from Brazil!
@leoyu9606 2 years ago
Hi Alexander! I've been a fan of you and Ava since this series started in 2020! Looking forward to the new updates this season. Just wondering, will AlphaFold get a snippet introducing it in detail?
@fehmidakhan8471 2 years ago
I'm really excited to learn. THANK YOU!
@SphereofTime 7 months ago
25:45
@joeoh8989 2 years ago
So W = {W_0, W_1, W_2, ..., W_n}; are these elements themselves the weights for each respective layer? Is it right to think of W as a set containing other sets, where W_i in W is the list of weights for the i-th layer? And W* is the combination of weights across the elements of W that gives the lowest loss? Am I understanding that right? Great presentation. Thank you so much for this; even though it can be challenging, it helps to see the mathematics behind it so the viewer can look it up later.
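That reading matches the usual convention: each W_i is the weight matrix of layer i (a matrix rather than a set), the full parameter collection W is the list of those matrices, and training searches over every entry of every W_i jointly for the values W* that minimize the loss. A minimal illustrative sketch, assuming NumPy (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 2]  # input dim 3 -> hidden 4 -> output 2

# W is a list whose i-th element is the weight matrix of layer i
W = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
b = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x, W, b):
    for W_i, b_i in zip(W, b):
        x = np.tanh(x @ W_i + b_i)  # each layer applies its own W_i
    return x

print(forward(np.array([1.0, -0.5, 2.0]), W, b))
```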
@E_rich 2 years ago
This is terrifying given the times tbh (war propaganda, etc.), hope we have a thorough way to distinguish between AI mimicry and reality...
@hasijahimanshu 2 years ago
Awesome content; thanks for creating it and keeping it free, unlike others :)
@yashrathi6862 2 years ago
Thanks for providing these great lectures! Are the assignments also available?
@SantoshJha1979 2 years ago
Hi, I am unable to download the slides; it says the GitHub page is non-operational. Could someone help, please?
@ashioyajotham 2 years ago
Me too btw
@duoduo9058 2 years ago
Just yesterday I was wondering when this would be out!
@abolfazlmohammadiseif284 1 year ago
Well done, this is a really good course. More power to you!
@aman_singh 2 years ago
The start of the course is amazing!
@saurabhchopra 2 years ago
46:02 Do we randomly pick 50% of the neurons and set their activations to zero, OR do we set each neuron's activation to zero with a probability of 50%? I am a bit confused. Say we have 10 neurons; after 50% dropout, will we use exactly 5 neurons, or can it be any number of neurons?
@armin1048 2 years ago
According to the official TensorFlow documentation for keras.layers.Dropout, each input unit is set to zero independently with probability equal to the rate. So it is the second interpretation: with 10 neurons and a rate of 0.5, anywhere from none to all of them could be dropped on a given step; 5 is only the expected count. (The surviving activations are also scaled up by 1/(1 - rate) so the expected sum over the inputs is unchanged.)
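A quick empirical check, as a minimal tf.keras sketch (the exact counts vary run to run):

```python
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dropout(rate=0.5)
x = tf.ones((1, 10))  # 10 "neurons", all active

for _ in range(3):
    y = layer(x, training=True)            # training=True enables dropout
    dropped = int(np.sum(y.numpy() == 0))  # count of zeroed units this step
    print(dropped, y.numpy())              # not always exactly 5 dropped;
                                           # survivors are scaled to 2.0 = 1/(1-0.5)
```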
@paragon9671 2 years ago
I am super glad to follow this class, thank you. I can't access the slides; I get a "404 File not found". Please kindly look into the link. Thank you.
@AAmini 2 years ago
Thanks for letting me know. I'm working on fixing that ASAP and getting the slides published.
@howardroth7524 2 years ago
Good lecture. Enjoyed it immensely.
@rahbar2002 2 years ago
Nice lecture, and relatable. Cool teaching skills!
@RichardTasgal 2 years ago
The effect of the cascaded layers is to create more complex nonlinear functions than the simple activation functions used at each layer, yes? I wonder if there is a motivation for that sort of nonlinear function other than the fact that real-life biological neurons work approximately as a threshold function of a sum of inputs. Not that I have an objection, but why not improve on nature (if there is a way to do so)?
@bcinerd 2 years ago
Can't see the Lecture 1 slides; the GitHub page is not available. Please help!
@avoidprogress6002 2 years ago
All the non-linear activation functions (I think that's what they were called) are non-negative; in two cases their images are even contained in a compact interval. But what if I want my network to output a real number, unbounded and possibly negative?
@AAmini 2 years ago
Great question! The solution is to simply remove the activation function from the last layer only (but keep it on all other layers). This way your output can be unbounded.
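A minimal tf.keras sketch of that idea (the layer sizes here are arbitrary, chosen only for illustration):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layers keep a non-linearity
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation=None),     # linear output: unbounded, can be negative
])
```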
@jayshan3645 2 years ago
It would be great if you provided an explanation of the notation used in the equations. (What do 'i' and 'j' represent?)
@williamgomez6226 2 years ago
Are you for real MIT????? I LOVE U
@uwakmfonutuk4939 2 years ago
I think transformers have become foundational to the field and should probably be added to the curriculum of this course. Maybe in the future?
@AAmini 2 years ago
They will be a prominent part of the new Lecture 2 (released next Friday), which is about time series modeling! Definitely check that one out once it premieres if you're interested!
@parvanehranjbar6113 2 years ago
Can you add Persian subtitles to your videos? It would be very helpful for Iranians and the Persian speakers who watch you. Thank you.
@ghadeerelsalhawy 1 year ago
Thank you so much for the amazing lecture.
@osama82405 2 years ago
Can't see the Lecture 1 slides on the website.
@youssefyounes4640 2 years ago
I didn't understand the importance of the bias.
@AAmini 2 years ago
If all inputs (x) are zero and we want our output (y) to be non-zero, then the bias is the only way to accomplish that; otherwise y = Wx will always equal zero as well.
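A tiny numerical sketch of that point, with made-up numbers:

```python
import numpy as np

x = np.zeros(3)                  # all inputs are zero
W = np.array([0.5, -1.0, 2.0])

print(W @ x)                     # 0.0 -- no choice of W can change this
print(W @ x + 0.7)               # 0.7 -- the bias b shifts the output away from zero
```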
@youssefyounes4640 2 years ago
@@AAmini Thanks
@davidniquot6423 2 years ago
I find that this lecture needs tightening; it could easily be compressed into a quarter of the time. Some simple ideas are repeated two or three times... there is no need for that. For example, with the learning rate, just apply some dichotomy there and you'd have a kind of adaptive learning rate with no chance of getting stuck.
@054siddarth3 2 years ago
Thank you for the course :D
@marioandresheviacavieres1923 2 years ago
Thank you very much!
@learning6210 2 years ago
Thank you for the series :-)
@shashihnt 2 years ago
Can one think of dropout as training a model with fewer parameters (more parameters lead to more complexity, which in turn leads to overfitting), where the subset of active parameters changes at each training step?
@RichardTasgal 2 years ago
Why not use Newton-Raphson? OK, you need second derivatives too, and you need to invert a matrix. But the illustrations show tens or hundreds of gradient descent steps, which probably adds up to more than the four data points you need to get first and second derivatives in each direction (generalizing to the Hessian in more dimensions), after which you can jump close to the minimum at each step. I know that they are just illustrations, but still... It's not as if the minimization/optimization is left to black-box functions that TensorFlow users are best advised not to look into. I'm writing this without doing a proper calculation of the computational cost. I hope I'm being clear and not going down a line of thought that has already been shown not to be helpful.
@nkamganstanis184 2 years ago
Hello sir, the lectures, slides, and lab materials are not yet available.
@helloansuman 2 years ago
Can we have the lab video?
@caiomar 2 years ago
Let's go baby! (:
@DanielSerratto 2 years ago
I'm curious: what is the meaning of the course code S191 at MIT?