Hello Sir, I think there is a mistake in this video for backpropagation. Basically, to find dL/dw11^2 we don't need the PLUS part, since O22 doesn't depend on w11^2. Please look into that. The PLUS part will be needed while calculating dL/dw11^1: there, O21 and O22 both depend on O11, and O11 depends on w11^1.
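Written out in the video's notation (reconstructed from this thread, so the exact indices are an assumption), the claim above is:

```latex
\frac{\partial L}{\partial w_{11}^{2}}
  = \frac{\partial L}{\partial O_{31}}\,
    \frac{\partial O_{31}}{\partial O_{21}}\,
    \frac{\partial O_{21}}{\partial w_{11}^{2}}
\qquad \text{(single path, no ``+'' term)}

\frac{\partial L}{\partial w_{11}^{1}}
  = \frac{\partial L}{\partial O_{31}}
    \left(
      \frac{\partial O_{31}}{\partial O_{21}}\,
      \frac{\partial O_{21}}{\partial O_{11}}
      +
      \frac{\partial O_{31}}{\partial O_{22}}\,
      \frac{\partial O_{22}}{\partial O_{11}}
    \right)
    \frac{\partial O_{11}}{\partial w_{11}^{1}}
\qquad \text{(two paths, through } O_{21} \text{ and } O_{22}\text{)}
```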
@alinawaz81472 жыл бұрын
Yes brother, there is a mistake; what is said above is correct.
@prakharagrawal40112 жыл бұрын
Yes, This is correct. Thank you for pointing this out.
@aaryankangte67342 жыл бұрын
true that
@vegeta1712 жыл бұрын
You are correct about that, but I think he wanted to take the derivative w.r.t. O11, since it is present in both nodes f21 and f22; if we replace w11^2 in the equation with O11, the equation would be correct.
@byiringirooscar3212 жыл бұрын
It took me time to understand it, but now I got the point, thanks man. I can assure you that @krish naik is the first professor I have
@ksoftqatutorials92515 жыл бұрын
I don't need to calculate a loss function for your videos, and there's no need to propagate the video back and forward, i.e., you explained it in the easiest way I have ever seen from anyone. Keep doing more, and I look forward to learning more from you. Thanks a ton.
@tarun4705 Жыл бұрын
This is the clearest mathematical explanation I have ever seen till now.
@moksh5743 Жыл бұрын
kzbin.info/www/bejne/f6nPZKGvoLB6b68
@AmitYadav-ig8yt5 жыл бұрын
It has been years since I solved any mathematics question paper or looked at a mathematics book. But the way you explained it was far better than the Ph.D.-holder professors at the university. I did not feel far away from mathematics at all. LoL, I do not understand my professors but understand you perfectly.
@RomeshBorawake3 жыл бұрын
Thank you for the perfect DL playlist to learn from. Wanted to highlight a change to make it 100% useful (already at 99.99%): at 13:04, for every epoch the loss decreases, adjusting toward the global minimum.
@vishnukce Жыл бұрын
But for negative slopes the loss has to increase, no, to reach the global maxima?
@being_aadarsh4 ай бұрын
@@vishnukce For negative slopes the weights need to be increased, not the loss.
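A minimal sketch of the update rule this sub-thread is talking about (the learning rate and gradient values below are made up for illustration): the loss is never pushed up on purpose; with a negative slope the weight moves up, and the loss still goes down.

```python
learning_rate = 0.1

def gradient_descent_step(w, dL_dw):
    # Move the weight against the slope of the loss surface.
    return w - learning_rate * dL_dw

print(gradient_descent_step(0.5, -2.0))  # negative slope -> weight increases (0.7)
print(gradient_descent_step(0.5, +2.0))  # positive slope -> weight decreases (0.3)
```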
@OMPRAKASH-uz8jw Жыл бұрын
You are nothing but the perfect teacher; keep on adding to the playlist.
@ganeshvhatkar904010 ай бұрын
One of the best videos I have seen in my life!!
@namyashah31735 ай бұрын
No one has ever explained it like you did. Hats off!!
@VVV-wx3ui5 жыл бұрын
This is simply yet superbly explained. When I learnt this earlier, it stopped at backpropagation. Now I've learnt what it is in backpropagation that makes the weight updates happen in an appropriate way, i.e., the chain rule. Thanks much for giving clarity that is easy to understand. Superb.
@rajeeevranjan69915 жыл бұрын
simply one word "Great"
@manateluguabbaiinuk-mahanu7612 жыл бұрын
Deep Learning Playlist concepts are very clear and anyone can understand easily. Really have to appreciate your efforts 👏🙏
@aj_actuarial_ca Жыл бұрын
Your videos are really helping me learn machine learning as an actuarial student coming from a pure commerce/finance background.
@TheMainClip-t1h3 жыл бұрын
You have saved my life, i owe you everything
@VIKASPATEL-of2sy5 жыл бұрын
I guess the differentiation done at 11:26 is a bit wrong, are you sure about it? I mean, why do we have to add an extra term of dL/dw12?
@debasispatra83685 жыл бұрын
Yes, correct. It seems to be a mistake. The addition part will come when we calculate the derivative of w11 for layer 1, not the derivative of w11 for layer 2.
@RajatSharma-ct6ie4 жыл бұрын
Yes you are correct !!
@bhavyaparikh69334 жыл бұрын
@@debasispatra8368 but why don't we have to add for layer 2, yet we do add for layer 1?
@mranaljadhav82594 жыл бұрын
@@bhavyaparikh6933 Same question here... if you got it, can you explain? I have just started deep learning.
@nikitlune95264 жыл бұрын
@@debasispatra8368 Hi, can you just tell how the weights are initially assigned, and how many hidden layers and how many neurons per layer there should be?
@nishitnishikant85483 жыл бұрын
Of the two connections from f11 to the second hidden layer, w11^2 affects only f21 and not f22 (as that is affected by w21^2). So dL/dw11^2 will only have one term instead of two. Anyone, please correct me if I am wrong.
@sahilvohra88923 жыл бұрын
I agree. I don't know why others didn't notice this same mistake!!!
@mustaphaelammari11283 жыл бұрын
I agree, I was looking for someone with the same remark :)
@ismailhossain51143 жыл бұрын
That's the point I was actually looking for.
@saqueebabdullah91423 жыл бұрын
Exactly, because if I collapse each of the two chains, the claimed identity becomes dL/dw11^2 = dL/dw11^2 + dL/dw12^2, which is wrong.
@RUBAYATKHAN893 жыл бұрын
Absolutely.
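To make the concern raised in this thread concrete, here is a small numerical check (the weight values are arbitrary and the activations are assumed to be sigmoid, which may not match the video exactly): the single-path chain-rule gradient for w11^2 agrees with a finite-difference estimate, so the extra "+" term is not needed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network: x -> O11 -> (O21, O22) -> O31, squared-error loss.
x, y = 0.8, 1.0
w11_1 = 0.3                # layer-1 weight
w11_2, w12_2 = 0.5, -0.4   # layer-2 weights out of O11
w11_3, w12_3 = 0.7, 0.2    # layer-3 weights into O31

def forward(w11_2_val):
    O11 = sigmoid(w11_1 * x)
    O21 = sigmoid(w11_2_val * O11)
    O22 = sigmoid(w12_2 * O11)
    O31 = sigmoid(w11_3 * O21 + w12_3 * O22)
    return O11, O21, O22, O31

O11, O21, O22, O31 = forward(w11_2)

# Single-path chain rule for dL/dw11^2 (no "+" term).
dL_dO31 = -2 * (y - O31)
dO31_dO21 = O31 * (1 - O31) * w11_3
dO21_dw11_2 = O21 * (1 - O21) * O11
analytic = dL_dO31 * dO31_dO21 * dO21_dw11_2

# Numerical gradient of the loss w.r.t. w11^2 for comparison.
eps = 1e-6
L_plus = (y - forward(w11_2 + eps)[3]) ** 2
L_minus = (y - forward(w11_2 - eps)[3]) ** 2
numeric = (L_plus - L_minus) / (2 * eps)

print(analytic, numeric)  # the two values agree, confirming the single-path formula
```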
@shaan25224 ай бұрын
Great explanation of the chain rule in backpropagation.. all my doubts are cleared!! Thanks
@saritagautam93284 жыл бұрын
This is really cool. Understood it for the first time. Hats off, man.
@varunsharma1331 Жыл бұрын
Great explanation. I had been looking for this clarity for a long time...
@adityashewale7983 Жыл бұрын
Hats off to you sir, your explanation is top level. Thank you so much for guiding us...
@abhishek-shrm4 жыл бұрын
This video explained everything I needed to know about backpropagation. Great video sir.
@mranaljadhav82594 жыл бұрын
Well explained, sir! Before starting deep learning, I decided to start learning from your videos. You explain in a very simple way... anyone can understand from your videos. Keep it up, sir :)
@hashimhafeez213 жыл бұрын
First time I understood it very well, thanks to your explanation.
@shrutiiyer683 жыл бұрын
Thank you so much for all your efforts to give such an easy explanation🙏
@mohammedsaif39224 жыл бұрын
Krish, you're awesome. I finally understood the chain rule from you. Thanks again, Krish.
@manikosuru57125 жыл бұрын
Amazing Videos...Only one word to say "Fan"
@ruchikalalit13045 жыл бұрын
@ 10:28 - 11:22 Krish, do we need both paths to be added, since w11 suffix 2 is not affected by the lower path, i.e., w12 suffix 2? Please tell.
@amit_sinha5 жыл бұрын
The second part of the summation should not come into the picture, as it will appear only when we calculate dL/dw12 with suffix 2.
@SiMsIMs-14 жыл бұрын
@@amit_sinha I think that is correct.
@niteshhebbare33394 жыл бұрын
@@amit_sinha Yes I have the same doubt!
@vishaldas63464 жыл бұрын
Not required; it's not correct, as w11^2 is not affected by the lower weights. The 1st part is correct, and the summation is required when we are thinking about w11^1.
@grownupgaming3 жыл бұрын
@@vishaldas6346 Yes!
@someshanand17994 жыл бұрын
Great video, especially since you give the concept behind it. Love it.. thank you for sharing it with us.
@aditideepak80334 жыл бұрын
You have explained it very well. Thanks a lot!
@kamranshabbir27345 жыл бұрын
The last partial derivative of the loss we calculated w.r.t. w11^2: is it correct, as shown there, that it depends upon two paths, one through w11^2 and the other through w12^2? Please make it clear, I am confused about it.
@wakeupps5 жыл бұрын
I think this is wrong! Maybe he wanted to discuss w11^1? However, a fourth term should then be added in the sum. Idk
@imranuddin55265 жыл бұрын
@@wakeupps Yes, I think he got confused and it was w11^1.
@Ip_man225 жыл бұрын
Assume he is explaining w11^1 and you'll understand everything. From the diagram itself, you can see the connections and can clearly see which weights depend on each other. Hope this helps.
@akrsrivastava4 жыл бұрын
Yes, he should not have added the second term in the summation.
@gouravdidwania10703 жыл бұрын
@@akrsrivastava Correct no second term needed for W11^2
@MrityunjayD4 жыл бұрын
Really appreciate the way you taught the chain rule... awesome..
@chartinger5 жыл бұрын
OP... nice teaching... Why don't we get teachers like you in every institute and college??
@deepaktiwari98543 жыл бұрын
Nice informative video. It helped me in understanding the concept. But I think at the end there is a mistake. You should not add the other path to calculate the derivative for w11^2. The addition should be done if we are calculating the derivative for O11. dL/dw11^2 = (dL/dO31 * dO31/dO21 * dO21/dw11^2)
@grownupgaming3 жыл бұрын
Yes Deepak, I noticed the same thing. There's a mistake around 12:21; no addition is needed.
@anupampurkait60663 жыл бұрын
Yes Deepak, you are correct. I also think the same.
@albertmichaelofficial8144 Жыл бұрын
Is that because we are calculating based on O31, and O31 depends on both outputs from the second layer?
@uddalakmitra10843 жыл бұрын
Excellent presentation Krish Sir .. You are great
@channel8048 Жыл бұрын
Thank you so much for this! You are a good teacher
@sundara25574 жыл бұрын
I am going through tour videos. You are Rocking Bro.
@sundara25574 жыл бұрын
Your*
@punyanaik525 жыл бұрын
Bro, there is a correction needed in this video... watch the last 3 minutes and correct the mistake. Thanks for your efforts.
@aaryamansharma68054 жыл бұрын
You're right.
@ZaChaudhry Жыл бұрын
❤. God bless you, Sir.
@tanvirantu66234 жыл бұрын
Love you sir, love your effort. Love from Bangladesh.
@hokapokas5 жыл бұрын
Loved it man... great effort in explaining the maths behind it and the chain rule. Please make a video on its implementation soon. As usual, great work.. looking forward to the videos. Cheers
@shivamjalotra79195 жыл бұрын
Hello Sunny, I myself have put together an absolutely brilliant repository explaining all the implementation details behind an ANN. See this: github.com/jalotra/Neural_Network_From_Scratch
@kshitijzutshi3 жыл бұрын
@@shivamjalotra7919 Great effort. Starred it. ⭐👍🏼
@shivamjalotra79193 жыл бұрын
@@kshitijzutshi Try to implement it yourself from scratch. See George Hotz's Twitch streams for this.
@kshitijzutshi3 жыл бұрын
@@shivamjalotra7919 Any recommendations for resources on understanding image segmentation problems using CNNs?
@manjunath.c29445 жыл бұрын
Clearly understood; your effort is very much appreciated :)
@skviknesh4 жыл бұрын
Thanks ! That was really awesome.
@good1142 жыл бұрын
Thank you Sir 🙏🙏🙏🙏♥️☺️♥️
@dnakhawa4 жыл бұрын
You are too good, Krish; nice data science content.
@chandanbp4 жыл бұрын
Great stuff for free. Kudos to you and your channel
@devgak73674 жыл бұрын
Just an awesome explanation of gradient descent.
@gunjanagrawal86262 жыл бұрын
Could you please recheck the video at around 11:00? The W11 weight update should be independent of W12.
@SiMsIMs-14 жыл бұрын
Awesome, mate. However, I think you got carried away with the second part being added; read the comments below and correct it, please. W12 may not need to be added. But it all makes sense. A very good explanation.
@mohamedanasselyamani43234 жыл бұрын
Same remark concerning W12. Good job Krish Naik, and thank you for your efforts.
@ravikumarhaligode29493 жыл бұрын
Hi both, I also have the same query.
@vishalshukla2happy5 жыл бұрын
Great way to explain man.... keep on going
@kavinvignesh28325 ай бұрын
For dL/dw11^3, it should be dL/dw11^3 = (dL/dO31 * dO31/dO31(before activation) * dO31(before activation)/dW11^3), right?
@aminzaiwardak67505 жыл бұрын
Thank you sir, you explain very well, keep it up.
@grownupgaming3 жыл бұрын
Isn't dL/dw2-11 independent of dL/dw2-12? At 12:21, why is dL/dw2-11 those two terms added up? dL/dw2-11 is the first line of additions, and dL/dw2-12 is the second line of additions.
@yedukondaluannangi73514 жыл бұрын
Thanks a lot for the videos, they helped me a lot.
@rajshekharrakshit90584 жыл бұрын
Sir, I think one thing you are doing is wrong. As w^(3)11 impacts O(31), there is one activation part here, so dL/dw^(3)11 = dL/dO(31) * dO(31)/df1 * df1/dw^(3)11. I might be wrong; can you please clear up my query?
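On the point raised in the two comments above: if the pre-activation of the output node is written as z_31 = w11^3*O21 + w12^3*O22 + b3 and O31 = f(z_31) (this decomposition is assumed here; the video collapses it into a single factor), the gradient with the activation shown explicitly is:

```latex
\frac{\partial L}{\partial w_{11}^{3}}
  = \frac{\partial L}{\partial O_{31}}\,
    \frac{\partial O_{31}}{\partial z_{31}}\,
    \frac{\partial z_{31}}{\partial w_{11}^{3}}
  = \frac{\partial L}{\partial O_{31}}\; f'(z_{31})\; O_{21}
```

Writing dL/dO31 * dO31/dw11^3 directly, as in the video, is the same quantity with the middle two factors merged.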
@sekharpink5 жыл бұрын
Very, very good explanation.. very understandable. Can I know in how many days you're planning to complete this entire playlist?
@arpitdas25304 жыл бұрын
Your teaching is great, sir. But can we also get some videos on how to apply these practically in Python?
@mdmuqtadirfuad11 ай бұрын
I can't understand (11:09) dL/dw^2_11 = 1st term + 2nd term... We are updating w11, but how does w12 make an impact (the 2nd term)?
@viveksm8633 жыл бұрын
I'm able to understand the concepts you are explaining, but I don't know where we get the values for the weights in forward propagation from. Could you brief us about that once, if possible?
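The video does not spell out where the starting weights come from, so the sketch below only shows one common convention (small scaled random weights, zero biases), not necessarily what Krish uses; the values are just initial guesses that backpropagation then keeps adjusting.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights (scaled by 1/sqrt(n_in), a common heuristic)
    # and zero biases; backpropagation updates both from these guesses.
    W = rng.normal(0.0, 1.0, size=(n_in, n_out)) / np.sqrt(n_in)
    b = np.zeros(n_out)
    return W, b

W1, b1 = init_layer(2, 3)  # input          -> hidden layer 1
W2, b2 = init_layer(3, 2)  # hidden layer 1 -> hidden layer 2
W3, b3 = init_layer(2, 1)  # hidden layer 2 -> output
```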
@sekharpink5 жыл бұрын
Hi Krish, please upload videos on a regular basis. I'm eagerly waiting for your videos. Thanks in advance.
@krishnaik065 жыл бұрын
Uploaded, please check tutorial 7.
@sekharpink5 жыл бұрын
@@krishnaik06 Thank you.. please keep posting more videos. I'm really waiting to watch your videos; I really liked your way of explanation.
@sandeepganage97175 жыл бұрын
Brilliant explanation!
@pranjalgupta94273 жыл бұрын
Nice 👍👏🥰
@amitjajoo95104 жыл бұрын
Best video on backpropagation on the internet.
@jontyroy1723 Жыл бұрын
In the step where dL/dw[2]11 was shown as addition of two separate chain rule outputs, should it not be dL/dw[2]1 ?
@omkarpatil28545 жыл бұрын
Thank you for the great explanation. I have a question: with this formula, isn't what is generated for dL/dW11 exactly the same as for dL/dW12? Am I right? Do both values get the same change in weights during backpropagation (though the old W values will be different)?
@SunnyKumar-tj2cy5 жыл бұрын
Same question. What I think is that, since we are finding the new weights, W11 and W12 for HL2 should both be different and should not be added, or am I missing something?
@abhinaspadhi83515 жыл бұрын
@@SunnyKumar-tj2cy Yeah, both should not be added as they are different...
@spurthygopal12395 жыл бұрын
Yes, I have the same question too!
@varunmanjunath62044 жыл бұрын
@@abhinaspadhi8351 it's wrong
@rede_neural10 ай бұрын
11:17 Are you sure we have to sum them? It doesn't seem like the two sides are equal when we "cancel" the chain.
@maheshvardhan18515 жыл бұрын
great effort...
@ThachDo5 жыл бұрын
10:44 You are pointing to w1_11, but why is the formula on the board the derivative w.r.t. w2_11?
@winviki1235 жыл бұрын
That's correct. Even I was wondering the same
@dipankarrahuldey62494 жыл бұрын
I think this part, dL/dw11^2, should be (dL/dO31 * dO31/dO21 * dO21/dw11^2). If we are taking the derivative of L w.r.t. w11^2, then w12^2 doesn't come into play. In that case, dL/dw12^2 = (dL/dO31 * dO31/dO22 * dO22/dw12^2).
@raj46243 жыл бұрын
Agree... dL/dw11^2 should be (dL/dO31 * dO31/dO21 * dO21/dw11^2), with no extra added term.
@waynewu77636 ай бұрын
How do you take the derivative dO31/dO21? What kind of equations are those?
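Regarding the question just above: assuming O31 = f(w11^3*O21 + w12^3*O22 + b3) with f a sigmoid (the exact activation and the bias term are assumptions, not taken from the video), that term is an ordinary derivative of the node's equation:

```latex
\frac{\partial O_{31}}{\partial O_{21}}
  = f'\!\left(w_{11}^{3} O_{21} + w_{12}^{3} O_{22} + b_{3}\right) \cdot w_{11}^{3},
\qquad
f'(z) = f(z)\,\bigl(1 - f(z)\bigr) \text{ for the sigmoid.}
```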
@saygnileri15713 жыл бұрын
Nice one, thanks a lot!
@pranjalbahore69833 жыл бұрын
so insightful @krish
@sivaveeramallu36454 жыл бұрын
excellent Krish
@meanuj15 жыл бұрын
Nice, and a request: please add some videos on optimizers...
@ga43ga545 жыл бұрын
Can you please do a Live Q&A session !? Great video... Thank you
@krishnaik065 жыл бұрын
Let me upload some more videos, then I will do a Live Q&A session.
@cynthiamoricordova50993 жыл бұрын
Thank you so much for all your videos. I have a question regarding the value assigned to the bias. Is this value a random value? I will appreciate your answer.
@camilogonzalezcabrales22274 жыл бұрын
Excellent video. I'm new to the field; could someone explain to me how the O's are obtained? Are the O's the result of each neuron's computation? Are the O's numbers or equations?
@chaitanyakumarsomagani5924 жыл бұрын
Krish sir, is it that w12^2 depends on w11^2? Only then could we do that differentiation. But w12^2 goes one way and w11^2 goes another way.
@axelrocco2760 Жыл бұрын
Sir, I have a doubt: how will we calculate d(O31)/d(O21)? Both are functions.
@mikelrecacoechea87303 жыл бұрын
Hey Krish, good explanation. I think there is one correction: at the end, what you explained for w11^2 is, I feel, actually for w11^1.
@tintintintin5764 жыл бұрын
Such a helpful video :) thanks
@nikhilramabadran29593 жыл бұрын
For calculating the derivative of the loss function w.r.t. W11^2, why do you also consider the other branch leading to the output?? Kindly reply.
@nikhilramabadran29593 жыл бұрын
It's mentioned clearly that it's w.r.t. only W11^2; that's the reason I'm asking this question.
@utkarshashinde91674 жыл бұрын
Sir, if we give every single neuron in a hidden layer the same weights, features, and bias, then what is the use of multiple neurons in a single layer?
@aswinthviswakumar643 жыл бұрын
Great video and a great initiative, sir. From 12:07, if we use the same method to calculate dL/dW12^2, it will come out the same as dL/dW11^2. Is this the correct way, or am I getting it wrong? Thank you!
@siddharthdedhia114 жыл бұрын
Skip to 3:50 if you've watched the previous videos.
@anshuyadav243 ай бұрын
How do we know that we have reached the global minimum and don't need to update the weights any more?
@bsivarahulreddy3 жыл бұрын
Sir, O31 is also impacted by the weight W11^3, right? Why are we not taking that derivative in the chain rule?
@tabilyst4 жыл бұрын
Hi Krish, can you please let me know: if we are calculating the derivative of the W2 11 weight, then why are we adding the derivative of the W2 12 weight to it? Please clear this up.
@hope22513 жыл бұрын
10:30 I don't think w11^2 is affecting O22, so the plus part should not come in.
@grownupgaming3 жыл бұрын
Yes, that is what I feel too!
@jerryys3 жыл бұрын
Great job! Does the last derivative need the second part? I don't get it.
@kartikesood82423 жыл бұрын
d(O22) will also be differentiated, but with respect to w11 it comes out to be zero. Hence whether you take it or not, the result will be the same.
@shindepratibha314 жыл бұрын
Hey Krish, your way of explanation is good. I think there is one correction: at the end, what you explained for w11^2 is, I feel, actually for w11^1. It would be really helpful if you corrected it, because many are getting confused by it.
@aneeshkalita74522 жыл бұрын
I think the same.. but a great method of teaching; there is no doubting that.
@utkarshdadhich7713 жыл бұрын
@krish naik Correction at 13:05.. I guess the loss should be decreasing, not increasing, with every epoch.
@sandipansarkar92114 жыл бұрын
Yeah, I did understand the chain rule, but being a fresher, please provide some easy-to-study articles on the chain rule so that I can deepen my understanding before proceeding further.
@bibhutiswain1755 жыл бұрын
Really helpful for me.
@satishkundanagar32374 жыл бұрын
"Why" back propagation works in learning weights of the neural networks? What is the intuition behind using back propagation to update the weights? I know that we are trying to make corrections w.r.t the predicted value if the predicted value has some errors when compared to the actual value.
@shashishankar13524 жыл бұрын
So, depending on the output predicted and the output expected, we derive our loss function or cost function. If by any means we can minimize the overall loss of the network, the predicted output and expected output will get closer, which is what we want. Now think about what we have in our hands to tweak so that the loss can be minimized: 1. model hyperparameters (learning rate, number of layers, units per layer), 2. weights and biases for units across all layers. For option 2, we use backpropagation, where we take the partial derivative of the loss function w.r.t. each unit's weight and adjust that weight accordingly. Taking the partial derivative of the loss function actually means computing the gradient (literally, a slope in a multidimensional plane); that slope can point upward or downward, and by stepping along the negative of the gradient we walk in the downward direction, which means we are minimizing the total loss as defined by the loss function.
@satishkundanagar32374 жыл бұрын
@@shashishankar1352 Thanks for your reply. BGD/SGD is used to solve the optimization problem at hand, and backpropagation is a technique used in sync with gradient descent for tuning the weights and biases. Whatever you explained are all facts that have been researched, documented, and used in implementing solutions across various fields. I'm looking for a mathematical and geometrical explanation, as well as a proof, of why backpropagation works.
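As a rough illustration of the loop described in this exchange (a single linear neuron with a squared-error loss on made-up data, not the network from the video):

```python
import numpy as np

# Made-up data that roughly follows y = 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 2.1, 3.9, 6.2])

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(200):
    y_pred = w * x + b                      # forward pass
    loss = np.mean((y - y_pred) ** 2)       # cost to be minimized
    dL_dw = np.mean(-2 * (y - y_pred) * x)  # partial derivative w.r.t. w
    dL_db = np.mean(-2 * (y - y_pred))      # partial derivative w.r.t. b
    w -= lr * dL_dw                         # step against the gradient
    b -= lr * dL_db

print(w, b, loss)  # w ends up near 2 and the loss shrinks epoch by epoch
```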
@pratikgudsurkar88924 жыл бұрын
We are solving a supervised learning problem, which is why we have the loss as actual minus predicted. What about unsupervised learning, where we don't have the actual y: how is the loss calculated, and how does the update happen?
@benvelloor4 жыл бұрын
I don't think there will be backpropagation in unsupervised learning!
@aravindvarma56795 жыл бұрын
Thanks Krish...
@Philanthropic-fg8xx10 ай бұрын
Then what will be the formula for the derivative of the loss w.r.t. w12^2?
@Skandawin785 жыл бұрын
Do you update the bias during backpropagation along with the weights? Or does it remain constant after initialization?
@krishnaik065 жыл бұрын
Yes, we have to update the bias too.
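A minimal sketch of how the bias gets its own gradient in the same backward pass (single sigmoid neuron, squared-error loss; the numbers are arbitrary): the chain is identical to the weight's, except the last factor dz/db is 1 instead of the input x.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron: a = sigmoid(w*x + b), loss L = (y - a)^2.
x, y = 0.5, 1.0
w, b, lr = 0.4, 0.1, 0.1

z = w * x + b
a = sigmoid(z)
dL_da = -2 * (y - a)
da_dz = a * (1 - a)

dL_dw = dL_da * da_dz * x    # dz/dw = x
dL_db = dL_da * da_dz * 1.0  # dz/db = 1

w -= lr * dL_dw
b -= lr * dL_db              # the bias is updated along with the weight
```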
@kasimidrisi76024 жыл бұрын
Hi sir, I think there is something wrong, because w11 with suffix 2 is not impacted by w12 with suffix 2..! But this playlist is really helpful to me, thank you sir... :)
@ravikumarhaligode29493 жыл бұрын
Hi Kasim, I am also having the same query.
@enquiryadmin83265 жыл бұрын
In backpropagation, for the calculation of the gradients using the chain rule for w11^1, I think we need to consider 6 paths. Please kindly clarify.