Backpropagation explained | Part 5 - What puts the "back" in backprop?

  35,610 views

deeplizard

A day ago

Comments: 205
@deeplizard
@deeplizard 6 years ago
Backpropagation explained | Part 1 - The intuition: kzbin.info/www/bejne/jnaWnKWcaKiEotU
Backpropagation explained | Part 2 - The mathematical notation: kzbin.info/www/bejne/aJ62qqaIrZJkmZI
Backpropagation explained | Part 3 - Mathematical observations: kzbin.info/www/bejne/fWbFZZ2Id7CBrtk
Backpropagation explained | Part 4 - Calculating the gradient: kzbin.info/www/bejne/kKOYp5x3j6yhmqc
Backpropagation explained | Part 5 - What puts the "back" in backprop?: kzbin.info/www/bejne/rnTPfJKVeNaNpLM
Machine Learning / Deep Learning Fundamentals playlist: kzbin.info/aero/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
Keras Machine Learning / Deep Learning Tutorial playlist: kzbin.info/aero/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@FerMJy
@FerMJy 6 years ago
Awesome, finally someone explained it. There are books, expensive ones, that don't. One piece of advice: it would be easier to follow if you used 2 neurons per layer instead of 6 or more. That's simpler to follow on paper and visually (which is more important).
@hasantas7751
@hasantas7751 5 years ago
Ah, now I get it better, thanks. I applied it with cross-entropy and it worked pretty well.
@envy5944
@envy5944 6 months ago
I swear this level of explanation surpasses ANY university teaching standard. Even after 6 years, new people like me are still amazed by the level of detail and the breakdowns throughout all this heavy math.
@deeplizard
@deeplizard 6 months ago
🙏🙏
@rohitjagannath5331
@rohitjagannath5331 6 years ago
My God! Hey dear tutor, you are so amazing with your explanations. I went through all your backprop math and was really impressed by your presentation. You broke the complexity down into such little chunks and explained it all so well. The final touch with the image of the man with his mind blown by the concept of backpropagation was amazing! Thank you very much. I need to practice this many more times to get a grip on it.
@deeplizard
@deeplizard 6 years ago
You're very welcome, Rohit! I'm glad you liked it and found the explanations helpful! Keep me posted once you get a full grip on all the math. There is nothing better than the _aha!_ moment you experience once the math fully clicks in your brain!
@AmrishKelkar
@AmrishKelkar 3 years ago
This 5-video series is just mind-blowing. As a teacher, I can imagine how painful it is to restate mathematical observations over and over, especially when there are a million sub- and superscripts involved. But every time, you state clearly what each term means. Kudos for that dedication. You've clearly put in the effort to write the script and then present it. Thank you for the series.
@deeplizard
@deeplizard 3 years ago
Happy to hear you notice and value the details :)
@arslanzahid4214
@arslanzahid4214 A month ago
You know how when a song is so good, you don't even realize it's long, because it grips your attention? Watching these videos is like that. Bravo, what an amazing explanation!
@deeplizard
@deeplizard 21 days ago
Lovely analogy, thank you!
@davewesj
@davewesj 3 years ago
Of the many videos and web pages on the subject, yours is the only one to elucidate the backprop error calculation for earlier layers. Thanks for your effort.
@mariodurndorfer6996
@mariodurndorfer6996 5 years ago
Watched the series twice; I think I've got it now. As already mentioned, I like your style: clear steps, and you take the right amount of time for each one. Great work, thanks!
@kushagrachaturvedy2821
@kushagrachaturvedy2821 4 years ago
These 5 videos on backpropagation are some of the best on KZbin.
@paulinevandevorst1514
@paulinevandevorst1514 2 years ago
THANK YOU! I'm writing a bachelor's thesis on deep learning for noise reduction in MRI images, and this has helped me very much in understanding backpropagation. The math I found in papers seemed so difficult that it was hard to stay motivated. Through this series of yours, though, I discovered that the math really isn't that difficult, and that it's actually quite intuitive once you grasp the notation. Good work! :)
@deeplizard
@deeplizard 2 years ago
Glad to hear it!
@marcfelsenstein2080
@marcfelsenstein2080 2 years ago
Hi there, I've recently started working with neural networks at work, and the 'Deep Learning Fundamentals' playlist is helping me get up to speed so much! Thanks for that. As for the 5 backpropagation videos: my derivatives are rusty, but even so, you managed to explain the concepts at a high level while also getting down to the nitty-gritty calculations, so I could follow along even without understanding absolutely every step of the way. Brilliantly explained. Thanks again!
@deeplizard
@deeplizard 2 years ago
Wonderful to hear, thanks for sharing!
@Hatesun-lz6fi
@Hatesun-lz6fi A year ago
Outstanding, clear, and concise explanation. The idea of splitting this topic into 5 videos was very helpful to me. Thank you!
@xariskaragiannis1190
@xariskaragiannis1190 2 years ago
Best explanation of backpropagation out there! All I have to say is THANK YOU.
@nurali2525
@nurali2525 3 years ago
In short: brilliant! So much work goes into making a series of videos like this. Thank you very much!
@yoyomemory6825
@yoyomemory6825 3 years ago
Best explanation of BP ever! Thank you so much!
@aashwinsharma1859
@aashwinsharma1859 3 years ago
Understood backpropagation completely. Thanks a lot for the wonderful explanation.
@wgc9794
@wgc9794 3 years ago
This was great. Part 1 set it up nicely, and in Part 2 you made it clear what each ingredient was. That part was dry but completely essential as you drew us through all the required calculations, patiently stepping through them as straightforwardly as they possibly could be. My hat is off to you, and I look forward to viewing more of your videos to learn more.
@GurpreetKakar4873
@GurpreetKakar4873 6 years ago
Great set of videos. For anyone not understanding: don't just watch the videos. First, take a notebook and write out the equations as you go. That's how I finally understood it completely. Thanks.
@deeplizard
@deeplizard 6 years ago
Glad to hear this, Gurpreet! It's always a rewarding feeling for me once new material finally "clicks" after I first learn it from another source and then write it out myself.
@ismailelabbassi7150
@ismailelabbassi7150 2 years ago
The greatest teacher of all time. Thank you so much; you're great, keep going.
@bishwasapkota9621
@bishwasapkota9621 4 years ago
This is by far the best one on backprop! Congrats and thanks!
@SebastianHudert
@SebastianHudert 7 months ago
Phew... really, everything was already said in the other comments below, but I just wanted to leave one too: these videos are spot-on the best educational content I have seen to this day. Period. And I have worked at a university myself and taught students ;-) I've never seen anybody navigate mathematical equations in such a clear and understandable way.
@deeplizard
@deeplizard 7 months ago
Thank you 🙏 Glad to hear that you appreciate the approach we take to teaching mathematical concepts!
@brendanmcgivern3540
@brendanmcgivern3540 6 years ago
This series on backpropagation is simply amazing! Arguably the best I have come across. You cover the low-level details while also abstracting them away at times to communicate a high-level understanding using your visual models. Great work! One thing you could also add is a video covering some basic calculus. Most online courses simply provide the derivatives for you and don't explain how a function can be differentiated from first principles using (f(x+dx)-f(x))/dx, or how the power rule, product rule, sum rule, chain rule, etc. can be used and why they work.
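(A quick aside, since the comment above mentions the difference quotient: here is a minimal Python sketch of checking a derivative numerically. The function names are illustrative, not from the videos.)

    import math

    def numerical_derivative(f, x, dx=1e-6):
        # Approximates f'(x) via the difference quotient (f(x+dx) - f(x)) / dx.
        return (f(x + dx) - f(x)) / dx

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # The analytic derivative of sigmoid is sigmoid(x) * (1 - sigmoid(x));
    # the numerical estimate should agree to several decimal places.
    x = 0.5
    print(numerical_derivative(sigmoid, x))   # ~0.2350
    print(sigmoid(x) * (1.0 - sigmoid(x)))    # ~0.2350

A check like this is also a handy way to verify a hand-coded backprop gradient.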
@deeplizard
@deeplizard 6 years ago
Thank you, Brendan! I'm really glad you enjoyed the series! Also, thanks for the calculus suggestion. Math is a huge passion of ours, and we intend to add math videos in the future. 😊
@samaryadav7208
@samaryadav7208 6 years ago
Thanks, I was waiting for this. The math was a little difficult to understand, but that's the fun part. I got about 85% of it. I'm gonna watch the full series again all at once.
@deeplizard
@deeplizard 6 years ago
Awesome! I'm glad you were able to go through the series and get the majority of it all the first time around. The math, especially with all the notation, definitely takes a while to fully wrap your head around. For me, I like to write, and then re-write the math myself when learning from someone/something else before I usually feel like I have a good grip on it. Let me know when you finish the series for the second time if you feel like you've risen from 85% to 100%!
@GauravSingh-ku5xy
@GauravSingh-ku5xy 4 years ago
Thank you for existing.
@vinaychitrakathi9237
@vinaychitrakathi9237 3 years ago
Chris and Mandy, hats off to your hard work. 🙏
@abhirajsingh5428
@abhirajsingh5428 4 years ago
By far the best explanation of this concept on the internet. Thank you! :)
@DEEPAKSV99
@DEEPAKSV99 4 years ago
Got the exact same feeling at 12:56! No words to thank you for the amount of service you are doing for the tech community. I feel your work will make a major contribution to AI development across the world, since a lot of young beginners like me are motivated into this field even more by your style of teaching. The effort you put into keeping your content short, precise, and yet interesting is incredible.
@deeplizard
@deeplizard 4 years ago
Thank you, Deepak!
@ayodejioseni2357
@ayodejioseni2357 3 years ago
Fantastic explanation of backpropagation. Thanks so much.
@circuithead94
@circuithead94 3 years ago
Good series of explanations. I still need to rewatch them and get my hands dirty with a simple example to hammer it home. I know all the information is there in the videos, so it's not like one of those lecture videos you rewatch in hopes of finally getting it, lol. I usually like looking at the whole of the math at once instead of small chunks at a time, but I definitely like what you did. I kept the notation video open on a separate screen. Again, thanks a lot for these.
@caesarsalad44
@caesarsalad44 5 years ago
Cheers mate, indeed. This 5-video series was very helpful in explaining things slowly and concisely. Thank you, deeplizard.
@TheEpicPineapple56
@TheEpicPineapple56 4 years ago
This was a fantastic video, explaining such a high-level concept in easy-to-grasp, bite-sized pieces. This entire series has been excellent! Thank you for putting in the countless hours of editing, scripting, and practicing to deliver this amazing content!!
@Sikuq
@Sikuq 4 years ago
Love the math in this 5-part backpropagation playlist (videos 23 through 27). Thanks.
@JarosawRewers
@JarosawRewers 5 years ago
Thank you very much for your series about backpropagation. You have a talent for explaining complicated things.
@irenez1321
@irenez1321 5 years ago
You are really a role model for all of us to aspire to! If someone can explain backprop as well as you do, it shows they have truly understood it. My only feedback would be to go through the intuition a little more: point to a few more nodes and explain how their derivatives depend on the layers after them in the network. I know that would just be repetition, but for someone new to backprop, that sort of repetition might be useful!
@svssukesh1170
@svssukesh1170 4 years ago
This series is really good. The way you explained it is amazing; I got a clear understanding of how the math behind backpropagation works. Thank you.
@mariaanson6537
@mariaanson6537 4 years ago
I'm really glad that I got a tutor like you. Your explanation is really excellent. Backprop has been playing with my mind for a long time, and now, thanks to you, it no longer does. Looking forward to seeing you untie many more of the black boxes in deep learning.
@justing912
@justing912 3 years ago
Really, really clear and consistent explanation. You rule!
@PritishMishra
@PritishMishra 3 years ago
Amazing!! All the parts of the backprop series are just mind-blowing, and I understood all the mathematics.
@sahibsingh1563
@sahibsingh1563 5 years ago
AWESOME SERIES. FINALLY UNDERSTOOD BACKPROP. BEST EXPLANATION :D
@Jackson_Leung
@Jackson_Leung 5 years ago
I am so grateful that I ran across this video. I am working on a NN project, and this video explains backpropagation so well!
@arjunbakshi810
@arjunbakshi810 4 years ago
Best explanation of backprop out there.
@dinos3741
@dinos3741 4 years ago
Excellent approach and explanation! Especially Part 4, where you analyze the backpropagation through the earlier layers, is very clear. Thanks!
@EGlobalKnowledge
@EGlobalKnowledge 6 years ago
Your series helped me understand both intuitively and mathematically. Thank you very much for your effort in getting these videos out.
@deeplizard
@deeplizard 6 years ago
You're very welcome, Ravisankar! I'm so glad to hear this! Thank you for taking the time to let me know.
@EliorBY
@EliorBY 4 years ago
Wow, what an amazing, illustrative mathematical explanation. Just what I was looking for. Thank you very much, deeplizard!
@stackologycentral
@stackologycentral 4 years ago
Amazing explanation. I finally fully understand the backprop calculation. Keep these coming. :)
@Tobbse91
@Tobbse91 5 years ago
I've now watched the whole deep learning playlist because I have to learn this stuff for my exams. First off, I don't write a lot of comments on videos, but I am really impressed by your explanations of such complex topics in deep learning. I don't know anyone who can speak that clearly! You have real talent! You helped me through this hard stuff, and I will recommend your channel to my course. Best regards from Germany, and thank you very much!
@deeplizard
@deeplizard 5 years ago
Thank you, Tobias! Glad you commented, and I'm happy to hear you're learning from the videos!
@badribaskaran8612
@badribaskaran8612 2 years ago
Just wanted to take some time to let you know that your videos explaining backpropagation are really comprehensive. Although your explanations are good, it's going to take a couple more rewatches for it all to sink in. Good stuff so far. As a complete novice, I find this series very good at helping me understand the fundamentals before getting into the hardcore programming stuff; for that part, I'm planning to use Deep Learning with Python by Francois Chollet, because I think textbooks work better for cementing my practical understanding.
@richarda1630
@richarda1630 3 years ago
Quite a good effort on such a herculean problem. I think that to fully appreciate this concept, people need a good understanding of derivatives and basic calculus, imho. Love the mind-blown ending :) Will watch this again =)
@bhaveshshah9487
@bhaveshshah9487 5 years ago
Hi! This was amazing; I'm speechless! This is the best explanation of backprop I've ever come across. Thank you so much for sharing your knowledge. I look forward to more content on deep learning, AI, CNNs, and so on. God bless!
@scrap8660
@scrap8660 A year ago
You. Are. Awesome. Thank you SO MUCH for this!!!
@joaoramalho4107
@joaoramalho4107 3 years ago
Marvelous explanation. Wow! I might even mention in my master's thesis who made me really understand deep learning.
@gilbertnordhammar3677
@gilbertnordhammar3677 6 years ago
I don't think anyone could have explained this more clearly than you just did! But still... maaaan, this was INTENSE! I'm glad I made it through, though :D
@deeplizard
@deeplizard 6 years ago
I'm glad you did too! 👏 And thank you!
@timrault
@timrault 6 years ago
Thank you so much for this series of videos; it really helped me get a better understanding of the concept of backpropagation, especially the 'back' part of it! Your explanations of the math are really clear, and I like that you are very precise about all the notation.
@deeplizard
@deeplizard 6 years ago
You're very welcome, Tim! Glad you found everything clear and helpful!
@Tschibibo
@Tschibibo 4 years ago
Thank you! I've never seen such a great and clear explanation of backpropagation!
@timothyorr6858
@timothyorr6858 4 years ago
Thanks for doing this excellent series. The backprop math was very detailed, clear, and helpful.
@prabaa123
@prabaa123 4 months ago
Great explanation, thank you so much!!
@John-wx3zn
@John-wx3zn 6 months ago
Hello Mandy. Thank you, I learned something. Weight 1,2 comes from the 3rd position of the weight vector in layer L, and it is the number on top of the arrow that points from node 2 to node 1. The arrow does not mean that weight 1,2 flows from node 2 in L-1 to node 1 in L; the flow of the activation function outputs is not what is being shown.
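(For reference, and assuming I'm reading the series' Part 2 notation correctly: the subscripts in a weight like $w_{1,2}$ follow the convention that $w_{jk}^{(L)}$ is the weight connecting node $k$ in layer $L-1$ to node $j$ in layer $L$, so the forward pass computes $z_j^{(L)} = \sum_k w_{jk}^{(L)} a_k^{(L-1)} + b_j^{(L)}$. As the comment says, the arrow in the diagram only labels which connection the weight belongs to; it does not itself depict data flowing.)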
@gallo189
@gallo189 4 years ago
Nothing else to say but thank you for all the videos. This shows the complexity of it all, and also the humor, with the mind-blown guy :D
@floriandebrauwer9140
@floriandebrauwer9140 5 years ago
Super helpful series, thanks for your work!
@muhammadtalhabaig4908
@muhammadtalhabaig4908 5 years ago
Hey!! This backpropagation series was the best! Honestly, I knew the whole intuition behind what happens, and that it updates weights and such, but I was really curious how it happens. This series taught me just that, and it was super interesting! Now I finally know what goes on behind the scenes mathematically, which we all should know! Thank you so much deep lizard, you are the best ❤️
@deeplizard
@deeplizard 5 years ago
Great to hear, Talha! Glad you gained some insight :D
@crazy-boy6143
@crazy-boy6143 2 years ago
I didn't quite get it yet, since I jumped right into this video to see whether it had a solution to the problem I'm working on. The math didn't seem so sophisticated; however, I'll re-evaluate after watching the earlier videos about backpropagation. Thanks for the videos, btw.
@joruPT
@joruPT 6 years ago
I just wanted to thank you for uploading these videos. They've been a great help to me, and I hope to see more in the future!
@deeplizard
@deeplizard 6 years ago
I'm glad to hear this, joruPT! You're welcome. Thanks for letting me know!
@vinaychitrakathi9237
@vinaychitrakathi9237 3 years ago
Thank you so much, 🙂 deeplizard and team, for this intuitive explanation 😊🙏
@thutoaphiri6989
@thutoaphiri6989 4 years ago
This series was really great. Thank you very much for the clear explanation of the math terms and how they all fit together. However, the series does not touch on the math used for updating the weights in L-2, and that is actually what I wanted to see. I have an idea based on what you covered, but I would like to be sure. Can you kindly point me to another video/article you know of that builds to that level of depth? Or can you pleeeeeeaaaaaase make another video where you do that?
@MrStudent1978
@MrStudent1978 6 years ago
Thanks, Miss! Your style of explanation is wonderful. Very nice...
@panwong9624
@panwong9624 6 years ago
This is a very informative video. Because of your amazing and effective explanation of backpropagation, I now understand the math behind the calculation. Thanks! :)
@deeplizard
@deeplizard 6 years ago
Thank you, Pan! And you're welcome :) I'm really glad to hear that you now understand the math behind it!
@drshahidqamar
@drshahidqamar 6 years ago
Mind-blowing explanation. You really are Deep Lizard. Good job, thank you.
@javiercmh
@javiercmh 2 years ago
This is great!! I am studying for my exam tomorrow. Thank you!
@javiercmh
@javiercmh 2 years ago
I got an A+!! Thanks again.
@deeplizard
@deeplizard 2 years ago
Congrats! 🥳
@abubakarali6399
@abubakarali6399 3 years ago
Why don't these YouTubers replace university professors? It could save so much of our time.
@paraklesis2253
@paraklesis2253 5 years ago
You deserve more likes.
@davidtemael1307
@davidtemael1307 6 years ago
Deeplizard, you are such an awesome wizard!
@transolve9726
@transolve9726 6 years ago
Agreed, she is pretty good at explaining things, and has the voice for it as well.
@GoredGored
@GoredGored 3 years ago
You put a lot of effort into creating this material; thank you for that. I was hoping you would convert the intuition into Python. Anyway, I am convinced you should be one of the companions on my long and frustrating ML journey. Subscribed.
@Mzah14
@Mzah14 4 years ago
Great explanation, thanks a lot.
@AlessandroPepe
@AlessandroPepe 5 years ago
This is by far the best explanation of backprop I've found. Still, it is of course missing the bias parameter. There seems to be an unspoken rule in all the dozens of backprop videos I've seen so far: if it's really well explained, skip the bias; if it's kinda sloppy, then sure, add the bias explanation. Still, kudos for the best one out there so far.
@deeplizard
@deeplizard 5 years ago
Thanks, Alessandro! For more on bias, we have the video below that goes into all the details for bias specifically. During SGD, the bias terms are updated using the exact same methodology as the weights. Hope this provides a bit more clarity. kzbin.info/www/bejne/fpbXd5yeqL2Gr9U
@AlessandroPepe
@AlessandroPepe 5 years ago
@@deeplizard That's really helpful. Thank you for sharing this precious info with us; you guys are amazing.
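(A minimal NumPy sketch of the point made in that reply: during SGD, biases are updated with exactly the same rule as weights. All numbers and names here are illustrative, not taken from the videos.)

    import numpy as np

    lr = 0.004                  # learning rate

    w = np.array([0.55])        # a weight
    b = np.array([0.75])        # a bias

    grad_w = np.array([0.12])   # dC/dw from backprop (made-up value)
    grad_b = np.array([0.08])   # dC/db, computed the same way

    # Gradient descent treats both parameters identically:
    w -= lr * grad_w
    b -= lr * grad_b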
@vugarbayramov647
@vugarbayramov647 6 years ago
YOU ARE ONE GREAT PERSON!!!!!! I AM IN LOVE WITH YOUR STYLE :) THANK YOU FOR YOUR EFFORT AND TIME!!
@deeplizard
@deeplizard 6 years ago
Thank you, Vugar! I'm happy to hear you're enjoying the videos! 😊
@MaahirGupta
@MaahirGupta 3 years ago
Great series, really helpful!
@hussainbhavnagarwala2596
@hussainbhavnagarwala2596 A year ago
The video series was super helpful. Could you do a solved problem with fewer nodes in each layer and code it step by step? That would be super helpful.
@tymothylim6550
@tymothylim6550 3 years ago
Thank you very much for this 5-part series! It was a fantastic explanation and I learnt a lot! I like the meme too xD
@paulbloemen7256
@paulbloemen7256 5 years ago
During the backpropagation process, for any neuron in the hidden layers and the output layer, one has the following information available:
- the notion of the loss, expressed as a number, say -0.45;
- the activation value of that neuron, say 0.65;
- the activation value of a neuron in the previous layer, say 0.35;
- the weight applied to the activation value of that same neuron in the previous layer, say 0.55;
- the bias, say 0.75.
Knowing these five values, and maybe using some or all of them:
- How is the value of the weight, currently 0.55, modified?
- How is the value of the bias, currently 0.75, modified?
- Is the learning rate (or step size) used in both modifications influenced by any of the values mentioned here, for example when these values become very big or very small?
I would truly appreciate an answer to these questions. Thank you very much!
@deeplizard
@deeplizard 5 years ago
Hey Paul - Check out the earlier video/blog on the learning rate in this series; there, I explain how the weights are updated: deeplizard.com/learn/video/jWT-AX9677k
Bias terms are updated in the same fashion. More on bias here: deeplizard.com/learn/video/HetFihsXSys
In regards to the learning rate, the next video explains the vanishing/exploding gradient problem that can occur when the weights become very small/big, and the learning rate plays a role in this problem. In general, the lr alone is not necessarily influenced by the weights or biases, but there are techniques you can use (in addition to weight initialization), like steadily modifying your lr during training: deeplizard.com/learn/video/qO_NLVjD6zE
@paulbloemen7256
@paulbloemen7256 5 years ago
@deeplizard Thank you for your prompt answer! I watched all the videos over the past few days to immerse myself in the subject; I somehow missed the point of the video on the learning rate. Just to be sure, assuming a learning rate of 0.004: the new weight would be 0.55 - (0.45 / 0.55) * 0.004 = 0.5467, and the new bias would be 0.75 - (0.45 / 0.75) * 0.004 = 0.7476. Are these numbers realistic? The modifications are really small; that would mean quite a few steps are needed, even if you only have to reach, say, 0.5 and 0.7 for those two parameters along some straight path. So I wonder: if so many steps are needed anyway, why trust one simple formula to the hilt when it doesn't seem to perform that well? I guess I am touching on the problem of vanishing and exploding gradients here. I would say: whatever the formula yields, make sure there is a minimum modification (against the vanishing gradient problem) and a maximum modification (against the exploding gradient problem). Over the whole training process one would expect the weight to approach its final value as the loss approaches its minimum. There, one would expect less risk of an exploding gradient, and a need for smaller modifications to reach the optimal value of the weight. The maximum modification will not hurt anyway, and the minimum modification can be lowered steadily: this minimum could be a function of the decrease in the loss compared to the previous step, or of the number of steps performed. Well, that last paragraph may be a bit of nonsense, or?
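(To make the update rule from those linked videos concrete: the gradient is the chain-rule product derived in Part 4, rather than the loss divided by the weight. A minimal single-neuron sketch in Python, with made-up numbers along the lines of those in the question above:)

    import math

    # One neuron with sigmoid activation and squared-error loss.
    a_prev = 0.35    # activation of a node in the previous layer
    w      = 0.55    # the weight being updated
    b      = 0.75    # the bias
    y      = 1.0     # target output
    lr     = 0.004   # learning rate

    z = w * a_prev + b
    a = 1.0 / (1.0 + math.exp(-z))   # this neuron's activation

    dC_da = 2.0 * (a - y)            # d/da of (a - y)^2
    da_dz = a * (1.0 - a)            # sigmoid derivative
    dz_dw = a_prev                   # d/dw of (w * a_prev + b)

    dC_dw = dC_da * da_dz * dz_dw    # chain rule
    w_new = w - lr * dC_dw           # the actual update step
    print(w_new)

Each step really is tiny at this learning rate, which is part of why training takes many iterations.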
@simonty1811
@simonty1811 3 years ago
This was better than my master's course.
@EnvoyOfFabulousness
@EnvoyOfFabulousness 5 years ago
Hey deeplizard! I posted 6 days ago, and since then I've managed to successfully implement a neural network in Java capable of learning with backpropagation, without using any pre-existing libraries. I'm pretty pleased I was able to do so, and feel I have a good understanding of these ideas and algorithms. So first, thank you for this excellent series. I wanted to ask a question to clarify something, though. The network I created for testing had 3 layers: the input, one hidden, and the output layer. As you explained, when calculating the derivative of the error with respect to any given weight in the L'th layer, we have three main terms to solve for and multiply together (I simply labeled them Term 1, 2, and 3). Similarly, for a given weight in the other layer (L-1), we also have three terms, two of which are solved the same way as for the L'th layer. Also, any change in the activation of a node in layer L-1 affects the nodes in L, so we sum up these effects. So here's my question: suppose I introduced a second hidden layer, L-2. Any change to the activation of a node in L-2 affects L-1, so I should again sum up these effects, right? And changes in L-2 affect L-1, which then affects L, correct? So when changing a layer beyond L-1, does the change have a sort of recursive, exponential effect for each additional layer deep you go? Putting it in some pseudo-code:

When updating L:
    For every node in L:
        Calculate Term 1
        Calculate Term 2
        Calculate Term 3
        derivative_rate = Term 1 * Term 2 * Term 3
        Update weight in L based on derivative_rate

When updating L-1:
    For every node in L-1:
        Create Term 1
        Calculate Term 2
        Calculate Term 3
        For every node in L: (sigma) {
            Calculate Term 1a
            Calculate Term 2a
            Calculate Term 3a
            derivative_rate_a = Term 1a * Term 2a * Term 3a
            Add derivative_rate_a to Term 1
        }
        derivative_rate = Term 1 * Term 2 * Term 3
        Update weight in L-1 based on derivative_rate

Now, if I were to update L-2:
    For every node in L-2:
        Create Term 1
        Calculate Term 2
        Calculate Term 3
        // from here is where I start to be unsure
        For every node in L-1: (sigma) {
            Create Term 1b
            Calculate Term 2b
            Calculate Term 3b
            For every node in L: (sigma) {
                Calculate Term 1a
                Calculate Term 2a
                Calculate Term 3a
                derivative_rate_a = Term 1a * Term 2a * Term 3a
                Add derivative_rate_a to Term 1b
            }
            derivative_rate_b = Term 1b * Term 2b * Term 3b
            Add derivative_rate_b to Term 1
        }
        derivative_rate = Term 1 * Term 2 * Term 3
        Update weight in L-2 based on derivative_rate

I would write that out with mathematical terms and sigmas, but I don't know how well that would translate in text. Mainly I'm wondering if I'm on the right track with my thinking that this becomes recursive, or exponential, in nature as you attempt to update deeper and deeper into the network, or if I am over-thinking this. Many thanks!
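(The question above has the right instinct, and the standard answer is to cache the inner sums: each layer's summed error signal (the accumulated "Term 1" above) is computed once and reused by the next layer back, so the cost grows linearly with depth rather than exponentially. A minimal NumPy sketch under my own assumptions: sigmoid activations, squared-error loss, and illustrative names throughout.)

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop_step(weights, biases, x, y, lr=0.01):
        # Forward pass, storing every layer's activations.
        a, activations = x, [x]
        for W, b in zip(weights, biases):
            a = sigmoid(W @ a + b)
            activations.append(a)

        # Output-layer error signal: dC/da * da/dz for squared-error loss.
        out = activations[-1]
        delta = 2.0 * (out - y) * out * (1.0 - out)

        # Walk backward; each layer reuses the delta cached from the layer
        # after it, so the nested sums collapse into one product per layer.
        for l in range(len(weights) - 1, -1, -1):
            grad_W = np.outer(delta, activations[l])   # dC/dW for layer l
            grad_b = delta                             # dC/db for layer l
            if l > 0:
                a_prev = activations[l]
                delta = (weights[l].T @ delta) * a_prev * (1.0 - a_prev)
            weights[l] -= lr * grad_W
            biases[l]  -= lr * grad_b

    # Illustrative 2-3-1 network, one training step:
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
    biases  = [np.zeros(3), np.zeros(1)]
    backprop_step(weights, biases, x=np.array([0.5, -0.2]), y=np.array([1.0]))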
@KatarDan
@KatarDan 6 years ago
Hey, that was amazing. I don't even know the chain rule or partial derivatives, but it was quite intuitive thanks to your explanations.
@deeplizard
@deeplizard 6 years ago
Thanks for letting me know, Dmitry! Glad it was easy to follow intuitively even without the calculus background.
@s25412
@s25412 3 years ago
At 10:59, it seems like 'n' represents both the number of nodes and the number of training samples. I recommend using 'm' for the latter to avoid confusion, as it may cause problems when implementing.
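(A tiny sketch of that naming suggestion, with illustrative shapes: keep 'n' for nodes and 'm' for samples when averaging the loss.)

    import numpy as np

    outputs = np.array([[0.9, 0.1],
                        [0.2, 0.8]])   # shape (m samples, n output nodes)
    targets = np.array([[1.0, 0.0],
                        [0.0, 1.0]])

    m = outputs.shape[0]   # number of training samples
    n = outputs.shape[1]   # number of output nodes

    cost = np.sum((outputs - targets) ** 2) / m   # mean over the m samples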
@vikrambharadwaj6349
@vikrambharadwaj6349 5 years ago
Why aren't you at a million subs already? :D Btw, thanks for the explanation!
@timothyjulian6817
@timothyjulian6817 3 years ago
This is amazing!
@acyutanand
@acyutanand 5 months ago
Well, I got that part without having to go through these videos, since I have a good math background. But thanks for the revision.
@DavidCH12345
@DavidCH12345 5 years ago
Your explanations are awesome! And they go to a depth where I can actually use them for my studies. Great work! Keep it up :-)
@furkanfiratli7908
@furkanfiratli7908 2 years ago
It was amazing and very helpful! Thank you so much!
@pratiklohia1
@pratiklohia1 4 years ago
These are superb. Just one request: could you provide a nesting diagram of the math equations, to help wrap one's head around the whole idea?
@hasantas7751
@hasantas7751 5 years ago
Everything is very nice, thank you. But what about changing the weights of L-2? I am stuck on that.
@bhaskar_iith
@bhaskar_iith A year ago
Excellent videos.
@hasantas7751
@hasantas7751 5 years ago
Incredible. Delightful. I am amazed.
@apilny2
@apilny2 5 years ago
These are really great! Thanks so much for uploading them. Cheers :-)
@gourabsarker5491
@gourabsarker5491 3 years ago
Great one!
@guogeorge7956
@guogeorge7956 4 years ago
You make complex things simple. Thanks.
@kmnm9463
@kmnm9463 3 years ago
Hi, very good video on backpropagation. I also feel that to get a firmer grip on the whole concept, apart from the notation, it would be very effective to take a real dataset for a classification problem, build the model, inspect the weights of the neurons, and check the output prediction. We could also work out the partial derivatives for the loss function (cross-entropy, or any other suited to classification) and check them. Could you make a video to this effect? It would be great. Thanks, KM
@ehabgaber9388
@ehabgaber9388 5 years ago
Really brilliant.
@1sankey2
@1sankey2 4 years ago
Hey, would it be possible for you to upload one small problem explaining all the steps of backpropagation? It would be great to see the actual calculation at each step. Thank you.
@shakyasarkar7143
@shakyasarkar7143 4 years ago
I totally understood it for the particular weight you discussed. Can you please tell us what the gradient of the loss is with respect to a weight in the first layer? How does the formula change? It seems we need the weights connecting the particular node in L-2 (i.e., layer 2) to all the nodes in L-1, all the activation outputs of layer L-1 (i.e., layer 3) feeding into each and every node of layer L, and also all the weights of the last layer. Right?
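(A sketch of the pattern, assuming the series' notation where $w_{jk}^{(l)}$ connects node $k$ in layer $l-1$ to node $j$ in layer $l$. Moving one layer back, the chain rule nests another sum:

$$\frac{\partial C}{\partial w_{kj}^{(L-1)}} = \underbrace{\left( \sum_{i} \frac{\partial C}{\partial a_i^{(L)}} \frac{\partial a_i^{(L)}}{\partial z_i^{(L)}} \, w_{ik}^{(L)} \right)}_{\partial C / \partial a_k^{(L-1)}} \frac{\partial a_k^{(L-1)}}{\partial z_k^{(L-1)}} \frac{\partial z_k^{(L-1)}}{\partial w_{kj}^{(L-1)}}, \qquad \frac{\partial z_k^{(L-1)}}{\partial w_{kj}^{(L-1)}} = a_j^{(L-2)}.$$

For a first-layer weight, the bracketed sum is itself expanded as another sum over the layer after it, and so on: each earlier layer reuses the sums already computed for the layers in front of it.)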
@ramakrishnandurairaj9386
@ramakrishnandurairaj9386 4 years ago
Awesome! Greetings from Tamil Nadu, India.
@evolvingit
@evolvingit 5 years ago
Awesome explanation!!!!