Thanks for the excellent explanation. One suggestion: please remove the background music; it is distracting.
@lucaspimentell9772 · 14 days ago
First video that explains this simply and precisely. Congrats
@far1din · 10 days ago
Glad you liked it 😄
@minhnguyenvu9479 · 21 days ago
The original is a 5x5 matrix and the kernel is a 3x3 matrix, so the output must be a (5-3+1) x (5-3+1), or 3x3, matrix, not 2x2 as in your video
@far1din · 20 days ago
The stride used in the example in this video is 2, hence the 2x2 output. You would have been correct if the stride were 1 😄
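To check the arithmetic yourself: for a valid (no-padding) convolution, the output side length is (N - K) / stride + 1. A minimal NumPy sketch of the 5x5 / 3x3 case discussed above (this loop-based `conv2d` is my own illustrative implementation, not the video's code):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid (no-padding) 2D cross-correlation with a given stride."""
    k = kernel.shape[0]
    out = (image.shape[0] - k) // stride + 1   # output side length
    result = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            patch = image[i*stride:i*stride+k, j*stride:j*stride+k]
            result[i, j] = np.sum(patch * kernel)
    return result

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3))
print(conv2d(image, kernel, stride=1).shape)  # (3, 3)
print(conv2d(image, kernel, stride=2).shape)  # (2, 2)
```

With stride 1 the same input gives the 3x3 output the original comment expected; with stride 2 it gives the 2x2 map from the video.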
@AjinFrankJ · 21 days ago
wow this is a gem
@TheGameChallenger · 25 days ago
nice
@chinedudimonyeka2856 · 28 days ago
Thank you
@shivanisrivarshini180 · a month ago
Great explanation. Thank you sir
@noohayub2188 · a month ago
What an excellent demonstration of backpropagation on a CNN; you won my heart. Literally no one on the internet explains it as clearly as you did. Please try to make a sequel to this video where you also use the biases and a more complex example
@hewramanwaran6444 · a month ago
Great Explanation. Thank you very much.
@saultube44 · a month ago
Nope, there are 3 results: approximately 2, 3.17, and 17.1, calculated from the last formula. The ± on the √ gives 4 variations; the 1st and 4th are the same, and the 2nd & 3rd give the other 2 results. And there are 2 cube roots, each with 3 results
@far1din · a month ago
Very much a valid concern! From 5:04, you will see that we first solve for "t". Then, since u = m/t, we substitute in that solution for "t". However, since the baseline is that "u" is a function of t, we must pick the same t. Hence, two variations. In other words, x = t - u/3. Since u = m/t, we substitute u into the formula for x and get x = t - m/(3t). This practically means that in the written-out formula for x, which you see at 5:14, the cube root terms have to be the same. This is why 5.46 and -1.46 (I suppose those are the other two variations) are not valid solutions at 7:47. I don't fully get how you got 3.17 and 17.1 (?)

When that's said, I fully understand the confusion that can occur when simply looking at the formula. To avoid it, you could write x = t - m/(3t) and have the definition of "t" written right beside it. Let me know if anything! 😃
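The sign-pairing can also be checked numerically. A small sketch, assuming the depressed cubic is written as x^3 + m·x = n (the values m = 6, n = 20, whose real root is x = 2, are my own illustration, not from the video):

```python
import math

m, n = 6.0, 20.0  # illustrative: x^3 + 6x = 20 has the real root x = 2

def real_cbrt(v):
    """Real cube root, valid for negative inputs too."""
    return math.copysign(abs(v) ** (1 / 3), v)

# t^3 = n/2 ± sqrt(n^2/4 + m^3/27): the two branches of the square root
disc = math.sqrt(n**2 / 4 + m**3 / 27)
for sign in (+1, -1):
    t = real_cbrt(n / 2 + sign * disc)
    x = t - m / (3 * t)   # u = m/t must be built from the SAME t
    print(round(x, 10))   # both branches give 2.0
```

Both ± branches collapse to the same root because the two values of t multiply to -m/3, which is exactly the "same t" pairing constraint described in the reply above.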
@saultube44 · a month ago
@@far1din Yes, I'm confused; hubris, the pretext humans make "when they have figured out the Universe", ha! 😊 People in Math/Science like to discard results they don't want, as simple as that, even when clearly there should be more results. But people don't search for the truth, they search for some convenient truth
@manfredbogner9799 · a month ago
Very good
@bug8628 · a month ago
Amazing video!! :D
@far1din · a month ago
Thanks! 😄
@manfredbogner9799 · a month ago
Very good 😊😊
@far1din · a month ago
Thanks 🥺
@franciscobrizuela766 · a month ago
Thank you! Now I'm one step closer to finishing a model for hw :)
@far1din · a month ago
You can do it!
@PiyushBasera-dy9cz · a month ago
❤❤
@AlbertoOrtiz-we2jc · a month ago
excellent explanation thanks
@far1din · a month ago
Glad it was helpful!
@chatgpt-nv5ck · a month ago
Beautiful🙌
@far1din · a month ago
Thank you 🙌
@DarrabEducation · 2 months ago
More amazing content like this will be appreciated.
@far1din · a month ago
Absolutely
@JamieTx23 · 2 months ago
Excellent video! Thanks for taking the time and breaking it down so clearly.
@far1din · a month ago
Very welcome!
@tejan8427 · 2 months ago
How do we know how many layers or filters we need at each layer? I mean, how can we construct our architecture?
@ForbiddenPrime · 2 months ago
Thank you for the source code. This will help me create some content for my syllabus. With love <3
@far1din · a month ago
Glad it was helpful, although it's not the most ideal code 😂
@vishvadoshi976 · 2 months ago
“Beautiful, isn’t it?”
@Param3021 · 2 months ago
Amazing CNN series, super intuitive and easy to understand!❤
@SterileNeutrino · 2 months ago
Nice. I remember working on digit recognition using handcoded analysis of pixel runs a long time ago. It never worked properly 😂 And it was computationally intensive.
@mahmoudhassayoun9475 · 2 months ago
Good job, the explanation is superb. I hope you don't stop making videos of this calibre. Did you use Manim to make this video or another video editor?
@dhudach · 2 months ago
I'm new to machine learning and neural networks. Your video is very helpful. I have built a small python script just using numpy and I can train numerous samples. So this is a big picture question. Let's say I've trained my program on thousands of inputs and I'm satisfied. Now I want to see if it can recognize a new input, one not used in training. What weight and bias values do I use? After I'm finished with training, how do I modify the script to 'guess?' It would seem to me that back propagation isn't used because I don't actually have a 'desired' value so I'm not going to calculate loss. What weight and bias values do I use from the training sessions? There are dozens of videos and tutorials on training but I think the missing piece is what to do with the training program to make it become the 'trained' program, the one that guesses new inputs without back propagation.
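The usual answer to this big-picture question: training produces final weight and bias arrays; you save them, and the "trained" program is just the forward pass with those arrays loaded, with no loss and no backpropagation. A minimal sketch where the names (W1, b1, W2, b2, model.npz) and the 784-32-10 shapes are illustrative placeholders, and random values stand in for your actual trained weights:

```python
import numpy as np

# --- end of a hypothetical training script ---------------------------
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(784, 32)), np.zeros(32)   # stand-ins for trained values
W2, b2 = rng.normal(size=(32, 10)), np.zeros(10)
np.savez("model.npz", W1=W1, b1=b1, W2=W2, b2=b2)   # save the final weights

# --- the 'trained' program: forward pass only ------------------------
def predict(x, params):
    """Guess a class for one input. No loss, no backpropagation."""
    h = np.maximum(0, x @ params["W1"] + params["b1"])   # hidden layer (ReLU)
    logits = h @ params["W2"] + params["b2"]             # output scores
    return int(np.argmax(logits))                        # index of the best guess

params = np.load("model.npz")               # reuse the saved weights
print(predict(rng.normal(size=784), params))  # prints the predicted class index
```

The only difference from the training script is what's removed: no desired output, no loss, no gradient step; `predict()` is the whole "guessing" program.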
@suthanraja1657 · 2 months ago
Thank you so much!
@rubytejackson · 2 months ago
This is an exceptional explanation, and I can't thank you enough... you have to keep going, you enlighten many students on the planet! That's the best thing a human can do!
@far1din · 2 months ago
Thank you brother, very much appreciate it! 🔥
@Brandonator24 · 2 months ago
I'm curious, why is the first convolution using ReLU and later convolutions using sigmoid? Edit: Also, when convolving over the previous convolution/max-pooling output, we have 2 images; how are the convolutions from these two separate images combined? Is it just adding them together?
@far1din · 2 months ago
Hey Brandon! 1. The ReLU and sigmoid are just serving as examples to showcase different activation functions. This video is just a «quick» visualization of the longer, in-depth version. 2. Not sure if I understood, but if you're referring to the filters, they are added. I go through the math behind this with visualizations in the in-depth video. I believe it should clarify your doubts! 😄
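On the second question, here is my understanding of the standard behaviour (shapes made up for illustration): each filter carries one kernel slice per input channel, each channel is convolved with its own slice, and the per-channel results are summed into a single output map:

```python
import numpy as np

def conv_valid(img, ker):
    """Valid 2D cross-correlation of one channel with one kernel slice."""
    k = ker.shape[0]
    out = img.shape[0] - k + 1
    res = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            res[i, j] = np.sum(img[i:i+k, j:j+k] * ker)
    return res

rng = np.random.default_rng(1)
channels = rng.normal(size=(2, 6, 6))   # two feature maps from the previous layer
kernels  = rng.normal(size=(2, 3, 3))   # one 3x3 kernel slice per input channel

# One output map: convolve each channel with its own slice, then add.
out = conv_valid(channels[0], kernels[0]) + conv_valid(channels[1], kernels[1])
print(out.shape)  # (4, 4)
```

This per-channel sum is equivalent to a single 3D correlation over the stacked (channel, height, width) block.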
@Brandonator24 · 2 months ago
@@far1din Will be checking that out, thanks!
@jaberfarahani6645 · 2 months ago
the best channel❤
@shirmithNirmal- · 2 months ago
That was an awesome explanation
@SamuelMoyoNdovie · 2 months ago
What an explanation man 🫡
@chinmaythummalapalli8655 · 2 months ago
I racked my brain for hours and couldn't figure out why the feature maps aren't multiplying after each layer, and this video just helped me realize they become channels of images. It helped me relax, and I think I can go downstairs for dinner now.
@far1din · 2 months ago
Glad it helped! 😄
@rubytejackson · 2 months ago
Exceptional explanation! I have several questions, but first I'd like to ask: is it ok to support you via the Thanks button, since I don't have a PayPal account? Thanks, warmest regards, Ruby
@far1din · 2 months ago
Ofc my friend! Feel free to shoot me a DM on X if you have any questions as well 💯
@RUDRARAKESHKUMARGOHIL · 2 months ago
Sorry if this sounds silly, but what actually is an inflection point? Is it where f'' = 0, or is there another geometric intuition?
@far1din · 2 months ago
You're correct. It is the point where the double derivative is equal to 0. There are many geometrical intuitions. For cubic equations, the inflection point serves as the point of rotational symmetry. This means that you can rotate a cubic function 180 degrees around the inflection point and still have the same plot. I actually cut this part out as it felt like a digression and I didn't want to prolong the video any more than necessary. Maybe I should have kept it in 😭😂
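That symmetry is easy to verify numerically: rotating 180 degrees about the inflection point (x0, f(x0)) maps the curve onto itself, which is the same as saying f(x0 + h) + f(x0 - h) = 2·f(x0) for every h. A quick sketch with arbitrary coefficients of my own choosing:

```python
# Numeric check of rotational symmetry about the inflection point
# x0 = -b/(3a), where f''(x0) = 0. Coefficients are arbitrary examples.
a, b, c, d = 2.0, -6.0, 1.0, 5.0

def f(x):
    return a * x**3 + b * x**2 + c * x + d

x0 = -b / (3 * a)   # inflection point: f''(x) = 6ax + 2b = 0
for h in (0.5, 1.0, 3.0):
    # points equidistant from x0 average back to f(x0):
    print(abs(f(x0 + h) + f(x0 - h) - 2 * f(x0)) < 1e-9)  # True
```

The identity holds for any a, b, c, d: after shifting to x0 = -b/(3a), the even-order terms cancel in the sum f(x0 + h) + f(x0 - h).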
@RUDRARAKESHKUMARGOHIL · 2 months ago
If you already made it, the better idea would have been to keep it... but btw, thank you ❤ @@far1din
@RUDRARAKESHKUMARGOHIL · 2 months ago
At 2:03 I have a doubt: you took the m×x cube, divided it into 3 parts, and then placed those on the other cube, but you only covered 3 sides and not all 6... so the volume will be 1/2 of t^3, no?
@RUDRARAKESHKUMARGOHIL · 2 months ago
Sorry, now I got it 😅 It was still a somewhat subtle confusion.
@far1din · 2 months ago
Haha nice! You'll see that it is a «cube» with side lengths = t once it starts spinning. 😄
@Minayazdany · 3 months ago
great videos
@RSLT · 3 months ago
❤❤❤ Liked and subscribed.
@far1din · 3 months ago
Watch the full video: kzbin.info/www/bejne/gJPSi5muis9_ic0
@averagemilffan · 3 months ago
Great video!! Just one question, why does the inflection point of a depressed cubic fall on x=0?
@far1din · 3 months ago
It is explained at around 06:09 in the video, but maybe not well enough, so I'll try again haha. The «x» value for the inflection point is found by setting f''(x) = 0. For a general cubic function f(x) = ax^3 + bx^2 + cx + d, the double derivative is f''(x) = 6ax + 2b.

If we want the inflection point, we set f''(x) = 0, which gives 0 = 6ax + 2b, and in turn x = -b/(3a). This means we can find the inflection point of any cubic function at -b/(3a).

Now, if we want the inflection point at x = 0, we get 0 = -b/(3a), which equates to b = 0. Going back to our initial equation f(x) and setting b = 0 gives f(x) = ax^3 + 0*x^2 + cx + d. This eliminates the x^2 term, as we multiply by zero, and leaves us with f(x) = ax^3 + cx + d, which is essentially a depressed cubic function.

I hope this cleared any doubt. Please let me know if there is anything else! 😄
@averagemilffan · 3 months ago
@@far1din Ah I see, thank you, that explains it pretty well. Cheers on your future videos!
@SelfBuiltWealth · 3 months ago
beautiful explanation❤
@SelfBuiltWealth · 3 months ago
This is a very unique and underrated explanation! Beautiful work, thank you so much ❤
@rainbow-cl4rk · 3 months ago
Nice! Would you do the same for degree 4?
@far1din · 3 months ago
Great suggestion! I’ll give it a try if I can find a compelling way to visualize both the problem and the solution. However, the extra dimension might make it challenging :/
@rainbow-cl4rk · 3 months ago
@@far1din you can add colour to visualise it, for example, or draw the projection in R³; there are many ways to represent a tesseract
@lucaspimentell9772 · 24 days ago
Thanks! This is a complete way to visualize cubics
@윤기좔좔엉덩이 · 3 months ago
What are the criteria for setting filters?
@MarshallBrunerRF · 3 months ago
I think this is a great explanation! My only thing would be that some of the equation transformations are hard to follow, since they're so rapid fire. Keep up the good work!
@far1din · 3 months ago
Thank you for the feedback. Rewatching the video now, I understand that the transformation might have been a bit too rapid. Will take that into consideration for the next videos! 😄
@MarshallBrunerRF · 3 months ago
That intro was so well done! Still watching but just wanted to say that before I forget
@emmanuelsheshi961 · 3 months ago
nice work sir
@srinathchembolu7691 · 3 months ago
This is gold. Watching this after reading Michael Nielsen makes the concept crystal clear
@r0cketRacoon · 4 months ago
Thanks very much for this video, but it would probably be more helpful if you also added a max pooling layer.
@r0cketRacoon · 4 months ago
What happens if I specify that conv layer 2 has only 2 dimensions? Will the same kernel be applied to both images and then added?