Backpropagation : Data Science Concepts

32,895 views

ritvikmath

3 years ago

The tricky backprop method in neural networks ... clearly explained!
Intro Neural Networks Video : • Intro to Neural Networ...

Comments: 93
@Justin-zw1hx 1 year ago
This is exactly what you need when you study backpropagation; this is a fundamental understanding of how it works. I hope the YouTube algorithm pushes this video to more people.
@ritvikmath 1 year ago
Thanks!
@e555t66 9 months ago
Yes. For the algorithm.
@DawitMengistuAbajifar 1 day ago
This is like the 100th video I'm watching, and I can tell he's actually trying to make us understand. I wish I could talk to you and ask you what's not clear to me. Thanks for the attempt!! I still don't understand!
@user-tt1zf1cy7t 1 month ago
Hi, I just want you to know that you are one of the best teachers on YouTube who can clearly explain this hard material and present it in a simple way.
@WatermelonSaurus 3 years ago
This video is the best explanation anyone can get of what backward propagation actually is. I wish I'd had this video 3 years earlier lol
@ritvikmath 3 years ago
thanks!
@rudyorre 3 years ago
Man this channel is going to be big someday. Keep it up man!
@ritvikmath 3 years ago
I appreciate that!
@harrykang8128 14 days ago
Definitely better than my Coursera course. Worth watching the ads for a video of this quality!
@henryabkin 1 year ago
My god, that explanation of the chain rule blew my mind. You have such a gift being able to explain seemingly complex topics so intuitively, in all of your videos. You deserve many more subscribers.
@user-tq1wl8ji5i 5 months ago
Man, you are so good. I love the fact that you start from first principles and define things mathematically instead of using analogies. Thanks a lot, man, and may God bless you and your hustle.
@user-th3km1cn4q 7 months ago
Actually, it is only the beginning of my road in Data Science and Time Series Forecasting, but your videos have saved my life throughout uni! These are the easiest to understand and clearest videos I have seen, without a doubt! Please keep up your work; it is extremely necessary for people like us. We are very grateful and appreciate it.
@elluranands6904 1 year ago
The chain rule explanation is just... wow. Amazing.
@ritvikmath 1 year ago
thanks!
@email4ady 3 years ago
Hugely underrated! You've got a flair for explaining that few others have!
@dansantner 1 year ago
This is the most helpful intro video on back propagation I've seen.
@teegnas 3 years ago
Great addition to your channel ... thanks for uploading
@ritvikmath 3 years ago
Glad you enjoy it!
@ernestodemenibus2803 7 months ago
This is simply amazing. The chain rule explanation unlocked the understanding. Thank you sir!
@frankd1156 3 years ago
Wow, finally I understood backpropagation... I think everyone can understand if you have the right teacher. It takes two to tango!
@AcTheMace 1 year ago
This is my first time actually understanding this! Thank you! Thank goodness I did calc a few years ago . . .
@shakilkhan4306 9 months ago
I really like to go through everything mathematically, and I found you doing exactly that. It's great.
@alejandrocanada2180 6 months ago
Best video I have seen about backpropagation, thank you very much!
@whimsicalkins5585 10 months ago
All I can say is... you deserve a big hug. Fantastic Teaching
@HoHoHaHaTV 9 months ago
What a great explanation!!! Such a clear explanation!!! Thank you for teaching so beautifully, Ritvik. I feel fortunate to have come across this video.
@brhnkh 1 month ago
You are exceptionally good at explaining difficult/intricate subjects clearly! Thank you for doing this!
@ritvikmath 1 month ago
You're very welcome!
@i07849 3 months ago
Now this is some explanation. Thank you, sir, for not just teaching the math operations like everyone else.
@qianchen7680 2 years ago
Everything you explained is so simple and intuitive! I suffered through this semester's machine learning course, and you really are the SAVIOR of this course! Thank you sooooo much!
@akilimali8726 3 years ago
Articulate to the core, you're gifted man...
@ritvikmath 3 years ago
Thanks I appreciate it!
@TheProblembaer2 11 months ago
You and 3Blue1Brown. Best math YouTubers out there.
@josephbattesti7074 2 years ago
I have an ML exam in two days, and I'm enjoying these videos a lot. Very clear explanations, thank you!!!!
@jamesboumalhab7337 11 months ago
Very well explained. Much appreciated!
@user-tt1zf1cy7t 1 month ago
Guys, hit the like button. We need teachers like him, and the likes will help him stay and create more videos!!!!
@huskyberlari1964 1 year ago
These are great videos. I save time watching them to understand the basic concepts, rather than scratching around the internet to find what I need. Thanks ritvikmath!
@yannkomenan2612 3 months ago
Thank you so so much for your explanation 🙏🏿 I think that I finally understand how backpropagation works. God bless you 🙏🏿
@zilaleizaldin1834 1 month ago
Big, big, big like!!! You really are a very good teacher. I hope I can do something like this in my language so students will benefit and understand the concept of backpropagation!
@apterixbr 1 year ago
excellent class! the best one for me. very intuitive!
@dcrespin 1 year ago
It may be worth noting that instead of partial derivatives one can work with derivatives as the linear transformations they really are. Also, looking at the networks in a more structured manner makes it clear that the basic ideas of BPP apply to very general types of neural networks. Several steps are involved.

1.- More general processing units. Any continuously differentiable function of inputs and weights will do; these inputs and weights can belong, beyond Euclidean spaces, to any Hilbert space. Derivatives are linear transformations, and the derivative of a neural processing unit is the direct sum of its partial derivatives with respect to the inputs and with respect to the weights. This is a linear transformation expressed as the sum of its restrictions to a pair of complementary linear subspaces.

2.- More general layers (any number of units). Single-unit layers can create a bottleneck that renders the whole network useless. Putting together several units in a unique layer is equivalent to taking their product (as functions, in the sense of set theory). The layers are functions of the inputs and of the weights of the totality of the units. The derivative of a layer is then the product of the derivatives of the units; this is a product of linear transformations.

3.- Networks with any number of layers. A network is the composition (as functions, in the set-theoretical sense) of its layers. By the chain rule, the derivative of the network is the composition of the derivatives of the layers; this is a composition of linear transformations.

4.- Quadratic error of a function. ...

With the additional text below this is going to be excessively long, hence I will stop the itemized comments here. The point is that a sufficiently general, precise and manageable foundation for NNs clarifies many aspects of BPP. If you are interested in the full story and have some familiarity with Hilbert spaces, please google our paper dealing with backpropagation in Hilbert spaces. A related article with matrix formulas for backpropagation on semilinear networks is also available.

We have developed a completely new deep learning algorithm called Neural Network Builder (NNB), which is orders of magnitude more efficient, controllable, precise and faster than BPP. The NNB algorithm assumes the following guiding principle: the neural networks that recognize given data, that is, the "solution networks", should depend only on the training data vectors. Optionally the solution network may also depend on parameters that specify the distances of the training vectors to the decision boundaries, as chosen by the user and up to the theoretically possible maximum. The parameters specify the width of chosen strips that enclose decision boundaries, from which strips the data vectors must stay away. When using traditional BPP the solution network depends, besides the training vectors, on guessing a more or less arbitrary initial network architecture and initial weights. Such is not the case with the NNB algorithm. With the NNB algorithm, the network architecture and the initial (same as the final) weights of the solution network depend only on the data vectors and on the decision parameters. No modification of weights, whether incremental or otherwise, needs to be done. For a glimpse into the NNB algorithm, search on this platform for our video "NNB Deep Learning Without Backpropagation"; links to a free demo software can be found in the video description.

The new algorithm is based on the following very general and powerful result (google it): Polyhedrons and Perceptrons Are Functionally Equivalent. For the conceptual basis of general NNs, see our article "Neural Network Formalism". Regards, Daniel Crespin
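To make the commenter's item 3.- concrete, here is a minimal sketch in generic notation of my own (the layer maps and cached activations are illustrative, not taken from the papers cited above): the network is a composition of layers, so by the chain rule its derivative is a composition of linear maps.

```latex
% A network as a composition of layers; by the chain rule its
% derivative is a composition of linear transformations (Jacobians).
\[
  F(x) = f_L \circ f_{L-1} \circ \cdots \circ f_1(x)
\]
\[
  DF(x) = Df_L(a_{L-1}) \cdot Df_{L-1}(a_{L-2}) \cdots Df_1(x),
  \qquad a_\ell = f_\ell \circ \cdots \circ f_1(x)
\]
% Backprop evaluates this product from the output side, reusing the
% cached forward activations a_ell, which is cheap when the final
% output (e.g., a scalar loss) is low-dimensional.
```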
@sowmiya_rocker 7 months ago
Wonderful explanation! Thank you.
@gregheth 2 years ago
Thank you. I finally understand back propagation
@ritvikmath 2 years ago
Glad!
@alcachi 10 months ago
Very good explanation! The right balance between building intuition and formulas! As a minor improvement, I think it would help to show the derivative for each weight at the end, so it can be seen how the terms repeat backwards through the formulas.
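For illustration, here is roughly what that addition would look like on a tiny one-hidden-unit chain (generic symbols of my own, not necessarily the video's): the derivative for the earlier weight reuses the factor already computed for the later one, which is exactly what the caching exploits.

```latex
% Chain: z1 = w1*x + b1, h = g(z1), z2 = w2*h + b2, y_hat = g(z2),
% with loss L(y_hat). The factor delta_2 is computed once and cached:
\[
  \frac{\partial L}{\partial w_2}
  = \underbrace{\frac{\partial L}{\partial \hat{y}}
                \frac{\partial \hat{y}}{\partial z_2}}_{\delta_2}
    \frac{\partial z_2}{\partial w_2}
  = \delta_2 \, h
\]
\[
  \frac{\partial L}{\partial w_1}
  = \delta_2 \, \frac{\partial z_2}{\partial h}
    \frac{\partial h}{\partial z_1}
    \frac{\partial z_1}{\partial w_1}
  = \delta_2 \, w_2 \, g'(z_1) \, x
\]
```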
@joaomanuelaraujo250 1 day ago
Greatly helpful and informative video, thank you very much!
@bash2004 10 months ago
You sir, are a legend. Thanks !
@Michael-yu9ix 1 year ago
Hands down the best explanation of backpropagation. Thanks for making these videos! Do you have a patreon or something to support you?
@yassine20909 1 year ago
Great explanation; no surprise it's ranked #3 in the backpropagation keyword search on YouTube, just after the 3Blue1Brown and StatQuest videos on the subject. Nice work 👍
@vadnone1913 2 years ago
Heh, bless you, mate! Thank you for using simple language with no 'fancy' words. You and 3Blue1Brown give a way better understanding of ML, compared to my program at uni. You are making the world a better place!
@basselkordy8223 1 year ago
excellent explanation, thank you very much!
@MEETPATEL-ut3qg 1 year ago
Appreciate the way of presentation, man.
@alijohnnaqvi6383 1 year ago
Best explanation everrr!!
@anirbansarkar6306 2 years ago
Your videos are immensely helpful. I love watching you explaining complex concepts in the most simple manner. Thank you so much.
@user-or7ji5hv8y 3 years ago
The explanation of caching was really helpful.
@ritvikmath 3 years ago
good to hear!
@tudorciutacu5568 7 months ago
Great explanation!
@jvjjjvvv9157 2 years ago
I am currently doing the MIT Statistics and Data Science MicroMasters, and a few times already I have relied on your videos for a clearer, more high-level explanation of certain concepts. Even when I already understand the concepts, as is the case with backpropagation, I often find it useful to watch them simply to strengthen and reinforce my intuition. So, thank you. You do good work!
@anthonnymonterroso8886 1 year ago
I never comment on any YouTube video, but I have to say your clear communication in describing the core concepts is simply amazing. I especially appreciate you walking through every little piece of information and focusing on the intuition, which helps the formulas look less daunting. I will definitely forward your explanations to anyone I know learning these topics. Thank you, and keep up the amazing videos.
@VinhNguyen-ho5px 3 years ago
Excellent video once again! For real application purposes, perhaps you can go over how to deal with imbalanced data (e.g., undersampling, oversampling, SMOTE).
@nklebiscorner664 3 years ago
Thank you for this video; it was really helpful for understanding backpropagation. Have you talked in another video about "direct propagation"? And I have a question: why do we prefer backpropagation over direct propagation?
@absoluteanagha 2 years ago
You have great content!!
@overgeared 3 years ago
Useful video, thanks. It would be helpful to also explain the practical aspects of the training algos: forward prop vs. back prop, epochs, batch vs. incremental modes, etc. Probably more of a DS Code than a DS Concepts topic.
@ritvikmath 3 years ago
Hey great suggestion thanks!
@Ckdude100 6 days ago
This was beautiful
@ahmad3823 1 year ago
Fantastic job!
@ritvikmath 1 year ago
Thank you! Cheers!
@hemantjain2510 10 months ago
God Level Explanation
@ritvikmath 10 months ago
Glad you think so!
@YingleiZhang 1 month ago
Million thanks!
@appuaparna2421 3 years ago
Your explanation is awesome! Can you make a video on the next step as well, i.e., gradient descent and finding the minimum error?
@ritvikmath 3 years ago
A gradient descent video is coming out soon! Stay tuned :)
@edwardgongsky8540 1 month ago
Mister, I cannot find any video you made about partial derivatives, and they show up all the time in deep learning! Can you make one, please? I much appreciate what you do.
@bokehbeauty 2 years ago
Every lesson you teach inspires me. You are the best professor I have ever experienced. Thank you!
@undertaker7523 2 years ago
So is it fair to say that gradient descent is just the method of parameter optimization we're using? I took an optimization course in college and remember learning about things like Newton's method, Lagrangians, etc. That would explain the connection between the two topics very nicely.
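That reading is right in spirit: backpropagation computes the gradients, and gradient descent is one choice of optimizer (alongside Newton-type methods and the rest) that consumes them. A minimal sketch of the update itself, on a made-up quadratic loss rather than a neural network:

```python
import numpy as np

# Toy gradient descent, separate from any neural network: minimize the
# made-up quadratic loss L(w) = ||X @ w - y||^2 / n. The gradient here
# is analytic; in a network, backpropagation is what supplies it.
def grad(w, X, y):
    n = len(y)
    return 2.0 / n * X.T @ (X @ w - y)

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)
lr = 0.02  # learning rate (step size)

for _ in range(2000):
    w -= lr * grad(w, X, y)  # the update: w <- w - lr * dL/dw

print(w)  # approaches the exact least-squares solution [0, 0.5]
```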
@kisholoymukherjee 2 years ago
Beautiful explanation. I have always believed that people whose own understanding is crystal clear also happen to be the best explainers. Your channel proves my intuition right. Kudos.
@FPChris 2 years ago
During backpropagation, do you do a forward pass after stepping back through each layer to get a new error, OR do you go back through all layers, then update all weights, then do a new forward pass?
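For what it's worth, the standard procedure is the latter: one full forward pass (caching activations), one full backward pass computing every gradient from those cached values, then a single update of all weights before the next forward pass. A self-contained toy sketch of that ordering on a 2-2-1 network (my own code, initialization, and loss, not the video's; biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))   # input -> hidden weights
W2 = rng.normal(size=(1, 2))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x, y_true = np.array([0.5, -1.0]), np.array([1.0])
lr = 0.1

for step in range(3):
    # --- full forward pass: cache everything backprop will need ---
    z1 = W1 @ x; h = sigmoid(z1)
    z2 = W2 @ h; y_hat = sigmoid(z2)
    loss = 0.5 * np.sum((y_hat - y_true) ** 2)

    # --- full backward pass: all gradients, no weight touched yet ---
    delta2 = (y_hat - y_true) * y_hat * (1 - y_hat)  # dL/dz2
    gW2 = np.outer(delta2, h)                        # dL/dW2
    delta1 = (W2.T @ delta2) * h * (1 - h)           # dL/dz1
    gW1 = np.outer(delta1, x)                        # dL/dW1

    # --- only now update all weights, then the next forward pass ---
    W1 -= lr * gW1
    W2 -= lr * gW2
    print(step, loss)
```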
@elaxter 2 years ago
How does the math change with multiple hidden layers? How do you compute the partial derivatives for the 3rd layer going into the 2nd layer?
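The math changes in length, not in kind: each extra hidden layer appends one more factor to the chain. In generic notation of my own (scalar weights for clarity, not the video's exact symbols), with hidden activations h1 and h2:

```latex
% Chain rule through two hidden layers:
\[
  \frac{\partial L}{\partial w_1}
  = \frac{\partial L}{\partial \hat{y}}
    \cdot \frac{\partial \hat{y}}{\partial h_2}
    \cdot \frac{\partial h_2}{\partial h_1}
    \cdot \frac{\partial h_1}{\partial w_1}
\]
% The first two factors are exactly what was already computed for the
% second layer's weights, so each additional hidden layer only
% appends one more factor to the cached product.
```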
@emmanuelogbe6809 7 months ago
thanks for sharing
@thachnnguyen 2 months ago
I feel like it's not done. So we understand the idea, but exactly how does it work? You already have that simple NN, so why not go through the algorithm steps to see the effect of what you explained? I.e., what do we do with the derivatives? How do they help improve the weights (and biases)?
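For anyone stuck at the same point: the step the video stops short of is the gradient descent update, the subject of the follow-up video mentioned in the replies above. Every weight and bias is nudged against its own derivative with some learning rate:

```latex
% Gradient descent update, learning rate eta > 0:
\[
  w \leftarrow w - \eta \frac{\partial L}{\partial w},
  \qquad
  b \leftarrow b - \eta \frac{\partial L}{\partial b}
\]
% A positive derivative means increasing the parameter increases the
% error, so the parameter is moved the other way.
```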
@eranhasid7630 5 months ago
Maybe it would be better if you also explained the whole process in the neural network in more detail at the end.
@tradewithdani122 11 months ago
Awesome ❤
@TheCentaury 1 year ago
Little mistake: you count 9 weights at 1:51... The problem is that you only have 6 weights if you have 2 inputs, 2 hidden units, and 1 output. You got confused by the (+1) that you traced with a circle, while it's neither an input nor a neuron; it's your bias... or you count the biases as additional weights, which would explain the count of 9.
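The second reading checks out arithmetically, assuming the 2-2-1 architecture the comment describes with a (+1) bias feeding each hidden unit and the output:

```latex
% Parameter count for a 2-2-1 network with biases:
\[
  \underbrace{2 \times 2}_{\text{input}\to\text{hidden}}
  + \underbrace{2 \times 1}_{\text{hidden}\to\text{output}}
  = 6 \text{ weights},
  \qquad
  \underbrace{2 + 1}_{\text{biases}} = 3,
  \qquad
  6 + 3 = 9.
\]
```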
@melissallajarun6740 1 year ago
thank u 💕
@ritvikmath 1 year ago
You're welcome 😊
@John-wx3zn 2 months ago
Why does the output of sigma point to h sub 2?
@Abanjostring 5 months ago
Thank you for speaking clearly. I can’t understand all the Indians.
@DerreseSol 1 month ago
WOW
@Break_down1 1 year ago
Waiting for you to drop that marker
@ritvikmath 1 year ago
haha!
@akshaygulabrao372 3 years ago
.
@jameshopkins3541 9 months ago
PLEASE DON'T DO MORE VIDS. YOU GET YOUR CIRCUS CLOWN
@fridaynight1500 1 year ago
This was the worst way of explaining something ever. Bro, you explained nothing; you just put the notation into words.