Backpropagation in Convolutional Neural Networks (CNNs)

50,236 views

far1din

1 day ago

Comments: 114
@louissimon2463 1 year ago
Great video, but I don't understand how we can find the value of the dL/dzi terms. At 7:20 you make it seem like dL/dzi = zi, is that correct?
@far1din 1 year ago
No, they come from the loss function. I explain this at 4:17. It might be a bit unclear, so I'll highly recommend you watch the video from 3blue1brown: kzbin.info/www/bejne/qnrIeX-kn9hoi5osi=Z6asTm87XWcW1bVn 😃
@rtpubtube 1 year ago
I'm with @louissimon2463: you show how dL/dw1 is related to dz1/dw1 + ... (etc.), but you never show/explain where dL/dz1 (etc.) comes from. Poof - miracle occurs here. Having a numerical example would help a lot. This "theory/symbology"-only post is therefore incomplete/useless from a learning/understanding standpoint.
@mandy11254 7 months ago
@rtpubtube It's quite literally what he wrote. He hasn't defined a loss function, so that's just what it is from the chain rule. If you're asking how the actual value of dL/dz1 is computed: the last layer has its own set of weights besides the ones shown in the video, in addition to an activation function. You use those and a defined loss function to compute dL/dzi. It's similar to what you see in standard NNs. If you've studied neural networks, you should know this. This is a video about CNNs, not an intro to NNs. Go study that before this. It's not his job to point out every little thing.
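For readers stuck on the same point, a minimal numpy sketch of where the dL/dzi terms could come from. This is an illustration, not taken from the video: the dense output layer, softmax, and cross-entropy loss here are assumptions.

    import numpy as np

    # Flattened conv outputs z feed an assumed dense layer + softmax + cross-entropy.
    z = np.array([0.3, -1.2, 0.8, 0.5])        # flattened conv outputs
    W = np.random.randn(2, 4) * 0.1            # dense weights: 2 classes, 4 inputs
    b = np.zeros(2)
    y_true = np.array([1.0, 0.0])              # one-hot label

    logits = W @ z + b
    y_hat = np.exp(logits) / np.exp(logits).sum()   # softmax probabilities

    # For softmax + cross-entropy, dL/dlogits = y_hat - y_true, so the chain
    # rule gives dL/dz = W^T (y_hat - y_true): the dL/dzi values the video starts from.
    dL_dz = W.T @ (y_hat - y_true)
    print(dL_dz)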
@khayyamnaeem5601 2 years ago
Why is this channel so underrated? You deserve more subscribers and views.
@eneadriancatalin 1 year ago
Perhaps developers use ad blockers, and as a result, KZbin needs to ensure revenue by not promoting these types of videos (that's my opinion).
@JessieJussMessy 1 year ago
This channel is a hidden gem. Thank you for your content
@nizamuddinkhan9443 1 year ago
Very well explained. I searched many videos, but nobody explained the change in the filter's weights. Thank you so much for this animated, simple explanation.
@haideralix 1 year ago
I have seen a few videos before; this one is by far the best. It breaks down each concept and answers all the questions that come to mind. The progression and the explanation are the best.
@far1din 1 year ago
Thank you! 🔥
@abhimanyugupta532 7 months ago
Been trying to understand backpropagation in CNNs for years, until today! Thanks a ton mate!
@yosukesharp 7 months ago
It was an obvious, primitive algo, dude... people like you are being called "data scientists" now, which is really sad...
@blubaylon 2 months ago
@yosukesharp Get off your high horse.
@rubytejackson 3 months ago
This is an exceptional explanation, and I can't thank you enough... You have to keep going, you enlighten many students on the planet! That's the best thing a human can do!
@far1din 3 months ago
Thank you brother, very much appreciate it! 🔥
@noohayub2188 2 months ago
What an excellent demonstration of backpropagation on a CNN, you won my heart. Literally no one on the internet explains it as clearly as you did. But please try to make another video as a sequel to this one where you also use the biases, a more complex example.
@srinathchembolu7691 5 months ago
This is gold. Watching this after reading Michael Nielsen makes the concept crystal clear
@markuskofler2553 1 year ago
Couldn’t explain it better myself … absolutely amazing and comprehensible presentation!
@farrugiamarc0 9 months ago
This is a topic which is rarely explained online, but it was very clearly explained here. Well done.
@boramin3077 5 months ago
Best video for understanding what is going on under the hood of a CNN.
@jayeshkurdekar126 1 year ago
You are a great example of fluidity of thought and words... great explanation.
@far1din 1 year ago
Thank you my friend. Hope you got some value! :)
@jayeshkurdekar126 1 year ago
@far1din Sure did.
@Joker-ez2fm 1 year ago
Please do not stop making these videos!!!
@far1din 1 year ago
I won’t let you down Joker 🔥🤝
@giacomorotta6356 1 year ago
Great video, underrated channel, please keep it up with the CNN videos!
@zemariamm 1 year ago
Fantastic explanation!! Very clear and detailed, thumbs up!
@msaeid_999 20 days ago
Bruh! This content is so underrated given the video's impressions. I was reading the computer vision chapter from "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" and was having a hard time understanding it. After watching this video, I can explain CNNs to a layman.
@heyman620 1 year ago
What a masterpiece.
@paedrufernando2351 1 year ago
Your channel is a hidden gem. My suggestion is to start a Discord and get some crowdfunding and one-on-ones for people who want to learn from you. You are gifted at teaching.
@DVSS77 1 year ago
Really clear explanation and good pacing. I felt I understood the math behind backpropagation for the first time after watching this video!
@pedroviniciuspereirajunho7244 1 year ago
Amazing! I was looking for material like this a long time ago and only found it here. Beautiful :D
@far1din 1 year ago
Thank you my brother 🔥
@SolathPrime 2 years ago
Well explained, now I need to code it myself.
@far1din 2 years ago
Haha, that’s the hard part
@SolathPrime 2 years ago
@far1din I think I came up with a solution. Here:

    # assumes: import numpy as np
    #          from scipy.signal import correlate2d, convolve2d
    def backward(self, output_gradient, learning_rate):
        # Gradients w.r.t. the kernels and w.r.t. the layer input
        kernels_gradient = np.zeros(self.kernels_shape)
        input_gradient = np.zeros(self.input_shape)
        for i in range(self.depth):
            for j in range(self.input_depth):
                # dL/dK[i, j]: valid cross-correlation of the j-th input
                # channel with the i-th output gradient
                kernels_gradient[i, j] = correlate2d(self.input[j], output_gradient[i], "valid")
                # dL/dX[j]: full convolution of the i-th output gradient with
                # kernel K[i, j] ("full" restores the input's shape)
                input_gradient[j] += convolve2d(output_gradient[i], self.kernels[i, j], "full")
        # Plain gradient-descent updates
        self.kernels -= learning_rate * kernels_gradient
        self.biases -= learning_rate * output_gradient
        return input_gradient

First I initialized the kernels gradient as an array of zeros with the kernels' shape, then I iterated through the depth of the kernels and the depth of the input, computing the gradient with respect to each kernel, and did the same to compute the input gradients. Your vid helped me understand the backward method better, so I have to say thank you sooo much for it.
@SolathPrime 2 years ago
@far1din I'll document the solution and put it here. When I do, please pin the comment.
@far1din 1 year ago
@SolathPrime That's great my friend. Will pin 💯
@saikoushik4064 10 months ago
Great explanation, helped me understand how it works in the background.
@RAHUL1181995 1 year ago
This was really helpful... Thank you so much for the visualization... Keep up the good work... Looking forward to your future uploads.
@sourabhverma9034 7 months ago
Really intuitive and great animations.
@Peterpeter-hr8gg 1 year ago
What I was looking for. Well explained.
@DSLDataScienceLearn 11 months ago
Great explanation: clear, direct, and understandable. Subbed!
@ramazanyel5979 7 months ago
Excellent. The exact video I was looking for.
@shazzadhasan4067 1 year ago
Great explanation with cool visuals. Thanks a lot.
@far1din 1 year ago
Thank you my friend 😃
@bamboooooooooooo 8 months ago
Great job. This explanation is really intuitive.
@mahmoudhassayoun9475 3 months ago
Good job, the explanation is superb. I hope you don't stop making videos of this calibre. Did you use manim to make this video or another video editor?
@guoguowg1443 8 months ago
Great stuff man, crystal clear!
@aikenkazin4096 1 year ago
Great explanation and visualization
@far1din 1 year ago
Thank you my friend 🔥🚀
@elgs1980 1 year ago
Thank you so much!!! This video is so so so well done!
@far1din 1 year ago
Thank you. Hope you got some value out of this! 💯
@shivanisrivarshini180 2 months ago
Great explanation. Thank you sir
@PlabonTheSadEngineer 1 year ago
Please continue your videos!!
@MarcosDanteGellar 1 year ago
The animations were super useful, thanks!
@LeoMarchyok-od5by 8 months ago
Best explanation
@chatgpt-nv5ck 2 months ago
Beautiful🙌
@far1din 2 months ago
Thank you 🙌
@osamamohamedos2033 8 months ago
Masterpiece 💕💕
@gregorioosorio16687 1 year ago
Thanks for sharing!
@aliewayz 7 months ago
Really beautiful, thanks.
@rodrigoroman4886 1 year ago
Great video!! Your explanation is the best I have found. Could you please tell me what software you use for the animations?
@far1din 1 year ago
I use manim 😃 www.manim.community
@bug8628 2 months ago
Amazing video!! :D
@far1din 2 months ago
Thanks! 😄
@AlbertoOrtiz-we2jc 2 months ago
Excellent explanation, thanks.
@far1din 2 months ago
Glad it was helpful!
@harshitbhandi5005 1 year ago
Great explanation.
@UtkalSinha-s8j 1 year ago
Nicely put, thank you so much.
@dhudach 3 months ago
I'm new to machine learning and neural networks. Your video is very helpful. I have built a small Python script just using numpy, and I can train numerous samples. So this is a big-picture question. Let's say I've trained my program on thousands of inputs and I'm satisfied. Now I want to see if it can recognize a new input, one not used in training. What weight and bias values do I use? After I'm finished with training, how do I modify the script to 'guess'? It would seem to me that backpropagation isn't used, because I don't actually have a 'desired' value, so I'm not going to calculate loss. What weight and bias values do I use from the training sessions? There are dozens of videos and tutorials on training, but I think the missing piece is what to do with the training program to make it become the 'trained' program, the one that guesses new inputs without backpropagation.
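The usual answer, as a hypothetical sketch (all names and shapes here are illustrative, not from the video): training ends by saving the final weights and biases, and the 'trained' script loads them and runs only the forward pass, with no loss and no backpropagation.

    import numpy as np

    # Stand-ins for the values a training run would produce (illustrative shapes).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    np.savez("model.npz", W1=W1, b1=b1, W2=W2, b2=b2)  # last step of the training script

    params = np.load("model.npz")  # first step of the "trained" script

    def predict(x, p):
        # Forward pass only: no desired value, no loss, no weight updates.
        h = np.maximum(0.0, p["W1"] @ x + p["b1"])  # hidden layer with ReLU
        logits = p["W2"] @ h + p["b2"]
        return int(np.argmax(logits))               # index of the best-scoring class

    print(predict(rng.normal(size=4), params))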
@AsilKhalifa 6 months ago
Thanks a lot!
@arektllama3767 2 years ago
1:15 Why do you iterate in steps of 2? If you iterated by 1 then you could generate a 3x3 layer image. Is that just to save on computation time/complexity, or is there some other reason for it?
@far1din 2 years ago
The reason why I used a stride of two (iterations in steps of two) in this video is partially random and partially because I wanted to highlight that the stride when performing backpropagation should be the same as when performing the forward propagation. In most learning materials I have seen, they usually use a stride of one, hence a stride of one for the backpropagation. This could lead to confusion when operating with larger strides. The stride could technically be whatever you like (as long as you keep it within the dimensions of the image/matrix). I could have chosen another number for the stride as you suggested. In that case, with a stride of one, the output would be a 3 x 3 matrix/image. Some will say that a shorter stride will encapsulate more information than a larger one, but this becomes “less true” as the size of the kernel increases. As far as I know there are no “rules” for when to use larger strides and not. Please let me know if this notion has changed as everything changes so quickly in this field! 🙂
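The stride arithmetic in this exchange can be checked with a small sketch (illustrative, consistent with the 5x5 input and 3x3 kernel discussed in these comments):

    # Output size of a valid convolution: floor((n - k) / s) + 1
    # for an n x n input, a k x k kernel, and stride s.
    def conv_output_size(n: int, k: int, s: int) -> int:
        return (n - k) // s + 1

    print(conv_output_size(5, 3, 2))  # 2 -> the 2x2 output used in the video
    print(conv_output_size(5, 3, 1))  # 3 -> the 3x3 output mentioned above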
@arektllama3767 2 years ago
@far1din I never considered how stride length could change depending on kernel size. I guess that makes sense: the larger kernel could cover the same data as a small kernel, just in fewer steps/iterations. I also figured you intentionally generated a 2x2 image since that's a lot simpler than a 3x3 and this is an educational video. Thanks for the feedback, that was really insightful!
@akshchaudhary5444 11 months ago
Amazing video, thanks!
@objectobjectobject4707 1 year ago
Great example, thanks a lot.
@ManishKumar-pb9gu 1 year ago
Thank you so much for this.
@r0cketRacoon 5 months ago
Thank you very much for this video, but it would probably be more helpful if you also added a max pooling layer.
@ziligao7594 7 months ago
Amazing
@manfredbogner9799 2 months ago
Very good.
@ItIsJan 1 year ago
5:24 Does this just mean we divide z1 by w1 and multiply by L divided by z1, and do that for all z's to get the partial derivative of L with respect to w1?
@far1din 1 year ago
It's not that simple. Doing the actual calculations is a bit more tricky. Given no activation function, z1 = w1*pixel1 + w2*pixel2 + w3*pixel3… You now have to take the derivative of this with respect to w1, then y = z1*w21 + z2*w22… and take the derivative of y with respect to z1, etc. The calculus can be a bit too heavy for a comment like this. I'll highly recommend you watch the video by 3blue1brown: kzbin.info/www/bejne/qnrIeX-kn9hoi5osi=Z6asTm87XWcW1bVn 😃
@vishvadoshi976 3 months ago
“Beautiful, isn’t it?”
@piyushkumar-wg8cv 1 year ago
Great explanation. Can you please tell me which tool you use for making these videos?
@far1din 1 year ago
Thank you my friend! I use manim 😃 www.manim.community
@yuqianglin4514 1 year ago
Fab video! Helped me a lot.
@far1din 1 year ago
Glad to hear that you got some value out of this video! :D
@govindnair5407 9 months ago
What is the loss function here, and how are the values in the flattened z matrix used to compute yhat?
@SiddhantSharma181 7 months ago
Is the stride only along the rows, and not along the columns? Is that common, or just simplified?
@MohamedBENELMOSTAPHA-l4v 10 months ago
I've had no trouble learning about 'vanilla' neural networks. Although your videos are great, I can't seem to find resources that delve a little deeper into explanations of how CNNs work. Are there any resources you would recommend?
@OmidDavoudnia 8 months ago
Thanks.
@samiswilf 1 year ago
Well done.
@im-Anarchy 1 year ago
Perfect. One suggestion: make the videos a little longer, 20-30 minutes is a good number.
@far1din 1 year ago
Haha, most people don't like these kinds of videos to be too long. Average watch time for this video is about 3 minutes :P
@im-Anarchy 1 year ago
@far1din Oh shii! 3 minutes, that was very unexpected. Maybe it's because people revisit the video to revise a specific topic.
@far1din 1 year ago
Must be 💯
@lordcasper3357 1 month ago
Thanks boss.
@bnnbrabnn9142 9 months ago
What about the weights of the fully connected layer?
@mandy11254 7 months ago
No point in adding it to this video since that's something you should know from neural networks. That's why he just leaves it as dL/dzi.
@simbol5638 1 year ago
+1 sub, excellent video
@far1din 1 year ago
Thank you! 😃
@PeakyBlinder-lz2gh 11 months ago
thx
@MoeQ_ 1 year ago
dL/dzi = ??
@far1din 1 year ago
I explain the term at 4:17. It might be a bit unclear, so I'll highly recommend you watch the video from 3blue1brown: kzbin.info/www/bejne/qnrIeX-kn9hoi5osi=Z6asTm87XWcW1bVn 😃
@Тима-щ2ю 1 year ago
You have nice videos that helped me better understand the concept of CNNs. But from this video it is not really obvious that the matrix dL/dw is the convolution of the image matrix and the dL/dz matrix, as shown here: kzbin.info/www/bejne/hp-ag35tqdSZhsk. The stride of two is also a little bit confusing.
@far1din 1 year ago
Thank you for the comment! I believe he is doing the exact same thing(?) I chose a stride of two in order to highlight that the stride should be the same as the one used during the forward propagation. Most examples stick with a stride of one. I now realize it might have caused some confusion :p
@minhnguyenvu9479 1 month ago
The original is a 5x5 matrix and the kernel is 3x3, so the output must be (5-3+1) x (5-3+1), or 3x3, not 2x2 as in your video.
@far1din 1 month ago
The stride used in the example in this video is 2, hence the 2x2 output. You would have been correct if the stride were 1 😄
@burerabiya7866 1 year ago
Hello, well explained. I need your presentation.
@far1din 1 year ago
Just download it 😂
@CorruptMem 1 year ago
I think it's spelled "Convolution"
@far1din 1 year ago
Haha thank you! 🚀
@int16_t 1 year ago
w^* is an abuse of math notation, but it's convenient.