Wonderful explanation, Misra. It's a simple concept, but I have seen many people get confused by it. I am sure it will help many learners.
@deniz.7200 · 1 year ago
The best explanation of this issue! Thank you very much Mısra!
@angelmcorrea1704 · 1 year ago
Thank you so much, excellent explanation.
@noelleletoile8980 · 2 months ago
Thanks, super helpful!!
@mmacaulay · 2 years ago
Hi Misra, your videos are amazing and your explanations are usually very accessible. However, while the vanishing/exploding gradient problem in NNs is a complex concept, I unfortunately found the explanation in this video confusing. Would it be possible to make another video on the vanishing/exploding gradient problem? Many thanks.
@Sickkkkiddddd · 1 year ago
Essentially, deeper networks increase the risk of wonky gradients because of the multiplicative effect of the chain rule during back-propagation. When the per-layer derivatives are small, the gradients in the earlier layers vanish, which means those neurons learn essentially nothing during backprop and the network takes forever to train. In the reverse case, when the per-layer derivatives are large, the gradients in the earlier layers explode, which ultimately destabilises the training process and produces unreliable parameters.
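For anyone who wants to see that multiplicative effect in numbers, here is a minimal Python sketch (my own illustration, not from the video); the layer count and the per-layer derivative values are made up:

```python
# Minimal sketch: the chain rule multiplies one per-layer derivative per layer,
# so the gradient reaching the earliest layers is a long product of factors.
# The values 0.5, 1.5 and 30 layers below are illustrative assumptions.

def backprop_factor(layer_derivative, num_layers):
    """Product of a repeated per-layer derivative over num_layers layers."""
    grad = 1.0
    for _ in range(num_layers):
        grad *= layer_derivative
    return grad

# Derivatives smaller than 1 shrink the gradient toward zero (vanishing):
print(backprop_factor(0.5, 30))   # ~9.3e-10 -> earlier layers barely learn

# Derivatives larger than 1 blow the gradient up (exploding):
print(backprop_factor(1.5, 30))   # ~1.9e+05 -> updates become unstable
```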
@noelleletoile8980 · 2 months ago
Her explanation was clear even to a neophyte neuroscientist.
@bay-bicerdover · 1 year ago
There is only one video on YouTube about Dynamic Length Factorization Machines. If you could explain how that machine works in a hands-on video for novices like me, it would be much appreciated.
@khyatipatni6117 · 11 months ago
Hi Misra, I bought the deep learning notes, but I did not know it was a one-time download. I downloaded them at the time and then lost them. How can I re-download them without paying again? Please help.
@misraturp · 11 months ago
Hello, it is not a one-time download. Did you try the link sent to your email again?