If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.
@shreypatel9379 2 years ago
These are some really good videos. You are unlike all of the Indian 'YouTube teachers' who target the mundane, less inquisitive audience. Don't stop making such intuitive videos, and who knows, maybe you'll be the pioneer of such a teaching wave in our country.
@MachineLearningWithJay 2 years ago
Wow… thank you so much for these kind words. I am glad to see that my content is helping people!
@kaustubhkapare807 2 years ago
I think there's one mistake in this video at 3:40, in the value of V3 where V2 is substituted... But a really great explanation!
@MachineLearningWithJay 2 years ago
Yea… thanks for letting me know
@Lucas-iy2yr 8 months ago
I was struggling a lot with this concept in Andrew Ng's Deep Learning specialization. Thank you so much, this is incredibly well explained! You've gained a new subscriber
@supanutsookkho2749 7 months ago
I think I'm in the same situation as you. It's really confusing to understand and link the concept here to neural networks. All the best for the rest of the course.
@Lucas-iy2yr 7 months ago
@@supanutsookkho2749 Yep, the 2nd course of the specialization is kinda confusing and out of context. Still an incredible specialization, and super clear when it comes to NNs and gaining intuitions about them, but the code labs and this part in particular are not that great.
@vladimirbosinceanu5778 2 years ago
Very nice job! I needed a second source to understand this concept, and this really helped me. Liked and subscribed. Keep 'em coming, and thank you!
@MachineLearningWithJay 2 years ago
Glad to hear 😊
@DhruvBohara-ct4ye 1 year ago
Crisp and to-the-point content! Loved it!!
@vinayakpevekar 2 months ago
Thanks
@wailmohamed355 2 months ago
Thanks for the explanation. The division by (1 − β^t) makes v1 = θ1, which is logical: it makes the average equal to the first data point when t = 1. However, this division might magnify the average when there are few data points, because (1 − β^t) will be small when t is small.
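The bias-correction point in this comment can be sketched in a few lines of Python. This is my own minimal illustration (the function name, β = 0.9 default, and the sample data points are made up, not from the video):

```python
def ewa_bias_corrected(data, beta=0.9):
    """Exponentially weighted average with bias correction."""
    averages = []
    v = 0.0
    for t, theta in enumerate(data, start=1):
        v = beta * v + (1 - beta) * theta     # raw EWA update
        averages.append(v / (1 - beta ** t))  # divide by (1 - beta^t) to correct the bias
    return averages

# At t = 1 the corrected value equals the first data point, as the comment notes:
# v1 / (1 - beta^1) = (1 - beta) * theta1 / (1 - beta) = theta1.
first = ewa_bias_corrected([10.0, 12.0, 11.0])[0]
print(round(first, 6))  # → 10.0
```

This also shows the magnification the comment warns about: at small t the divisor (1 − β^t) is tiny (0.1 at t = 1 with β = 0.9), so the raw average is scaled up strongly; as t grows, β^t → 0 and the correction fades away.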
@arjungoud3450 2 years ago
Simple & crisp, thank you.
@MachineLearningWithJay 2 years ago
😇😇
@ZirothTech 2 years ago
Great video! Very clear and easy to follow
@MachineLearningWithJay 2 years ago
🙂😇
@aritahalder9397 2 years ago
Great video, I could grasp the concept within the first 3 minutes itself.
@sreenivaskrishna7351 5 months ago
Fantastic explanation, thank you!
@omarallam4548 1 year ago
Great explanation ♥ Thanks
@MachineLearningWithJay 1 year ago
Thanks!
@rajeshpamarthi308 1 year ago
This helped me understand it, thank you.
@MachineLearningWithJay 1 year ago
Glad it helped!
@danishmentalitysaminos3041 2 years ago
Very well explained!
@viktoraghajanyan8477 3 years ago
Very good explanation
@MachineLearningWithJay 3 years ago
Thank you so much!
@feedtowin1309 3 years ago
Thank you so much for your video!
@MachineLearningWithJay 3 years ago
You're welcome!
@sandeepkomalpothu44 2 years ago
Hey Jay!! Could you please upload the PDFs on Exponentially Weighted Averages, Momentum, and RMSprop?
@MachineLearningWithJay 2 years ago
Hi… I will upload the PDFs and let you know.
@freeNode5 1 month ago
But what is θ? Is it for scale?
@MachineLearningWithJay 1 month ago
@@freeNode5 Hi, θ (theta) is the learnable parameter (e.g., weights or bias). You can watch my video on linear regression to better understand this.
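To make the reply concrete, here is my own tiny sketch of where θ fits in (illustrative names and values, not from the video): in gradient descent with momentum, the parameter θ, a weight or bias, is updated using an exponentially weighted average of its gradients.

```python
def momentum_step(theta, grad, v, beta=0.9, lr=0.1):
    """One gradient-descent-with-momentum update of a parameter theta."""
    v = beta * v + (1 - beta) * grad  # exponentially weighted average of the gradients
    theta = theta - lr * v            # move theta against the averaged gradient
    return theta, v

# theta could be a single weight of a linear model; grad is its current gradient.
theta, v = 1.0, 0.0
theta, v = momentum_step(theta, grad=2.0, v=v)
```

So θ is not a scale factor: it is the quantity being learned, and the EWA machinery from the video smooths the gradient signal that drives its updates.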