Gated Recurrent Unit | GRU | Explained in detail

3,116 views

Learn With Jay

1 day ago

Comments: 34
@AshwinMaccount 4 months ago
We missed you so much, bro. Now that you're back, please don't stop making this content. It will be so helpful to me. I've watched all your previous videos. I learnt deep learning from your videos only, bro.
@MachineLearningWithJay 4 months ago
Thank you so much for writing this. It means a lot to me!
@draco.the.voyager 2 days ago
I'm doing my master's in Data Science, and your videos are the first I watch on any topic related to ML. Thanks a lot, bro!
@MachineLearningWithJay 2 days ago
@@draco.the.voyager Flattered to hear this. Thank you so much! Wishing you luck with your master's.
@Aditya-ri7em 4 months ago
Bro, I just completed your CNN playlist. You did a great job in it, man.
@MachineLearningWithJay 4 months ago
Thank you so much!
@ingenuity8886 4 months ago
So nice to see you back. And I am gonna be honest: I needed an LSTM video so much, and then I saw your recommendation. I seriously thought you had left YouTube to pursue something else. Welcome back!
@MachineLearningWithJay 4 months ago
Hey, thank you so much! Yeah, I got busy with my job and work, so I couldn't upload videos. Now I have started working on more videos. Hope you will find those valuable too.
@chakri6262 4 months ago
Keep making these videos up to transformers, bro. They are very helpful. Great work.
@MachineLearningWithJay 4 months ago
Yes, that's the plan. Thank you!
@mr.unknown6179 4 months ago
Thanks, bro, for all the videos.
@MachineLearningWithJay 4 months ago
Glad you like them!
@dheemanth_bhat 4 months ago
Awesome, bro, you are back... thanks for bringing back your valuable content.
@MachineLearningWithJay 4 months ago
Hey, thanks a lot for keeping me motivated. Really appreciate it 😄
@susrandomguy 4 months ago
Great work.
@MachineLearningWithJay 4 months ago
Hey, thank you so much!
@adityaghai220 4 months ago
Great that you are back!! I would love to know what you have been up to, career-wise.
@MachineLearningWithJay 3 months ago
Hi Aditya... I have been working as a software developer for the last 2 years. I moved to a different city (Pune), so I needed some time to adjust to the work and the city... made new friends, learned new skills. Now I can finally start working on videos again :) How are you? Care to share where you are from and what you are doing?
@adityaghai220 3 months ago
@@MachineLearningWithJay Thank you so much for replying, it means a lot. I am Aditya Ghai, 3rd-year CSE at IIIT Jabalpur. I really enjoy DL/ML stuff, and your videos are really great!
@Ramu9119 4 months ago
Awesome video, man. Keep it up!
@MachineLearningWithJay 4 months ago
Thank you so much!
@Ramu9119 4 months ago
@@MachineLearningWithJay Hey, do you also have plans to upload videos related to Transformers, "Attention Is All You Need", and more Gen AI stuff?
@shivamtiwari8106 1 day ago
Can you make a series on autoencoders? I have gone through all your videos, and I'm requesting it here.
@rahulrajeev9763 4 months ago
Hello Jay, where were you all this time? I've been trying to find your social profiles.
@MachineLearningWithJay 4 months ago
Hi Rahul, apologies. I got busy with my job and work for the past 2 years. Now I am working on this again and will upload more videos. Thanks for the support, really appreciate it.
@dipanbanik6569 22 days ago
Great explanation, bro. Only one doubt: at 03:58, while computing the gradient we are doing backpropagation through time. In the summation, for the terms with higher T values the gradient vanishes due to repeated multiplication by the weight matrix, but the terms with smaller T values do not tend to 0, making the sum of the subgradients nonzero. So how does the gradient ∂L/∂W vanish here if all the subgradient terms don't vanish simultaneously?
@yashgoplani5632 4 months ago
Woohoo 🎉
@MachineLearningWithJay 4 months ago
😁😁
@shivamtiwari8106 12 days ago
Bro, can you please make a project on speech enhancement using an ANN?
@basab4797 4 months ago
What happened to you, man... we missed you a lot.
@MachineLearningWithJay 4 months ago
Hey... yeah, I got busy with my job, so I couldn't make any videos. Thank you so much for all the support. It really means the world to me.
@pranav3918 4 months ago
Bro, can you make videos on the transformer model? 🥺🥺
@MachineLearningWithJay 4 months ago
Yes, for sure. On it!! The only thing is I can only work on my videos over weekends, as I don't find any time during weekdays. Still, the videos will start coming in soon now.