What is Vanishing/Exploding Gradients Problem in NNs

8,209 views

Mısra Turp

1 day ago

Comments: 12
@deniz.7200 · 1 year ago
The best explanation of this issue! Thank you very much Mısra!
@ProgrammingCradle · 2 years ago
Wonderful explanation, Misra. It's a simple concept, but I have seen many people get confused by it. I am sure it will help many learners.
@noelleletoile8980 · 22 days ago
Thanks, super helpful!!
@angelmcorrea1704 · 10 months ago
Thank you so much, excellent explanation.
@mmacaulay · 2 years ago
Hi Misra, your videos are amazing and your explanations are usually very accessible. However, while the vanishing/exploding gradient problem in NNs is a complex concept, I unfortunately found the train of thought in this video confusing. Would it be possible to make another video on the vanishing/exploding gradient problem? Many thanks.
@Sickkkkiddddd · 1 year ago
Essentially, deeper networks increase the risk of unstable gradients because of the multiplicative effect of the chain rule during back-propagation. When the multiplied factors are small, the gradients in the earlier layers vanish, which means those neurons learn essentially nothing during backprop and the network takes forever to train. When the factors are large, the gradients in the earlier layers explode instead, which destabilises the training process and produces unreliable parameters.
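[Editor's note: a minimal NumPy sketch of that multiplicative effect. The depth, the standard-normal weights, and the standard-normal pre-activations are made-up illustrations, not anything from the video; the point is only that the chain rule multiplies one small sigmoid-derivative factor (at most 0.25) per layer, so the gradient reaching the earliest layers collapses toward zero.]

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_layers = 50              # hypothetical depth, for illustration only
grad = 1.0                 # gradient arriving from the loss at the output

for _ in range(n_layers):
    x = rng.normal()                       # pre-activation at this layer (assumed N(0, 1))
    local = sigmoid(x) * (1 - sigmoid(x))  # sigmoid derivative, never larger than 0.25
    w = rng.normal()                       # a typical weight (assumed N(0, 1))
    grad *= w * local                      # chain rule: multiply the per-layer factors

print(grad)  # typically vanishingly small after 50 layers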
@noelleletoile8980 · 22 days ago
Her explanation was clear even to a neophyte neuroscientist.
@bay-bicerdover · 1 year ago
There is only a single video on YouTube about Dynamic Length Factorization Machines. If you could explain how that machine works in a hands-on video for novices like me, it would be much appreciated.
@khyatipatni6117 · 9 months ago
Hi Misra, I bought the notes for deep learning. I did not know it was a one-time download; I downloaded it at the time and then lost it. How can I re-download it without paying again? Please help.
@misraturp · 9 months ago
Hello, it is not a one-time download. Did you try the link that was sent to your email again?
@bay-bicerdover · 1 year ago
The maximum derivative value of the sigmoid function is 0.25.
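[Editor's note: a quick numerical check of this claim. The derivative of the sigmoid is s'(x) = s(x)(1 - s(x)), which peaks at x = 0 where s(0) = 0.5, giving 0.5 * 0.5 = 0.25 — the factor the comment above multiplies once per layer.]

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10.0, 10.0, 100001)
d = sigmoid(x) * (1.0 - sigmoid(x))  # derivative of the sigmoid
print(d.max())                       # ~0.25, attained at x = 0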
Related videos:
How to Choose the Correct Initializer for your Neural Network · 3:53
How Does Batch Normalization Work · 13:23 · Mısra Turp · 5K views
How (and Why) to Use Mini-Batches in Neural Networks · 14:08 · Mısra Turp · 4.6K views
Recurrent Neural Networks (RNNs), Clearly Explained!!! · 16:37 · StatQuest with Josh Starmer · 585K views
When Should You Use L1/L2 Regularization · 8:19 · Mısra Turp · 8K views
How to Solve Vanishing Gradients in Keras and Python · 10:06 · Mısra Turp · 1.8K views
Tensors for Neural Networks, Clearly Explained!!! · 9:40 · StatQuest with Josh Starmer · 189K views
AI, Machine Learning, Deep Learning and Generative AI Explained · 10:01 · IBM Technology · 500K views