Backpropagation is to neural networks what negative feedback is to closed-loop systems. The idea comes quite naturally to people who have studied automation and control engineering. However, many articles tend to mix things up, in this case backpropagation and gradient descent. Backpropagation is the process of passing the error back through the layers and using it to compute how the weights should change. Gradient descent is the algorithm used for the actual recalculation, and there are other algorithms for recalculating the weights as well.
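To make that separation concrete, here is a minimal sketch in Python (a single linear neuron with a squared-error loss; all names and numbers are made up for illustration). Backpropagation produces the gradients; the update rule, here plain gradient descent, is a separate step you could swap for another optimizer.

```python
# Minimal sketch: backpropagation vs. gradient descent for one linear neuron.
# Model: y_hat = w * x + b, loss L = (y_hat - y)**2.

def backprop(w, b, x, y):
    """Backpropagation: compute gradients of the loss w.r.t. w and b."""
    y_hat = w * x + b              # forward pass
    dL_dyhat = 2 * (y_hat - y)     # dL/dy_hat
    grad_w = dL_dyhat * x          # chain rule: dL/dw = dL/dy_hat * dy_hat/dw
    grad_b = dL_dyhat * 1.0        # chain rule: dL/db = dL/dy_hat * dy_hat/db
    return grad_w, grad_b

def gradient_descent_step(w, b, grad_w, grad_b, lr=0.1):
    """Gradient descent: one possible rule for using those gradients."""
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.5, 0.0
for _ in range(20):
    gw, gb = backprop(w, b, x=1.0, y=2.0)
    w, b = gradient_descent_step(w, b, gw, gb)
print(w, b)  # w * 1.0 + b approaches the target 2.0
```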
@Kiera9000 · a year ago
Thanks for getting me through my exams, because my professor's script does literally nothing to help me understand deep learning. Cheers mate
@saisrikaranpulluri1472 · 3 days ago
Incredible! Martin's example made me understand the concept exactly. Your real-life examples are great as well as entertaining.
@hamidapremani6151 · 9 months ago
Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!
@hashemkadri3009 · 9 months ago
marvin u mean, smh
@BrianMarcWhittaker · 2 months ago
Thank you for explaining this. I'm reading “Architects of Intelligence” and that's the first time I’ve heard the term backpropagation. Your examples and drawings help me better understand the topic.
@Zethuzzz · 10 months ago
Remember the chain rule that you learned in high school? Well, that's what is used in backpropagation.
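For a single weight, that chain rule looks like this (a generic sketch, with z the weighted input to a neuron, a its activation, and L the loss):

```latex
\frac{\partial L}{\partial w}
  = \frac{\partial L}{\partial a}
    \cdot \frac{\partial a}{\partial z}
    \cdot \frac{\partial z}{\partial w}
```

Each factor is local to one step of the forward pass, which is why the error can be propagated backward layer by layer.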
@Mary-ml5po · a year ago
I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?
@im-Anarchy · a year ago
What did he even teach, actually?
@anant1870 · a year ago
Thanks for this Great explanation MARK 😃
@ca1790 · 7 months ago
The gradient is passed backward using the chain rule from calculus. The gradient is just a multivariable form of the derivative. It is an actual numerical quantity for each "atomic" part of the network, usually a neuron's weights and bias.
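As a concrete illustration of that (numbers arbitrary), here is the gradient for a single sigmoid neuron computed factor by factor, in Python:

```python
import math

# One neuron: a = sigmoid(w*x + b), squared-error loss L = (a - y)**2.
# Compute the numerical gradient for its "atomic" parts, w and b.
x, y = 1.5, 1.0                    # input and target (arbitrary)
w, b = 0.8, -0.2                   # current weight and bias

z = w * x + b                      # pre-activation
a = 1.0 / (1.0 + math.exp(-z))     # sigmoid activation
dL_da = 2 * (a - y)                # derivative of the squared error
da_dz = a * (1 - a)                # derivative of the sigmoid
grad_w = dL_da * da_dz * x         # chain rule: dz/dw = x
grad_b = dL_da * da_dz * 1.0       # chain rule: dz/db = 1
print(grad_w, grad_b)              # actual numerical quantities, as stated
```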
@mercyl2355 · 2 months ago
Thanks Marlon.
@Adnanuni · 3 months ago
Thank you Mariin😃
@pleasethink4789 · a year ago
Hi Marklin! Thank you for such a great explanation. (btw, I know your name is Martin. 😂 )
@sakshammishra9232 · a year ago
Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊
@brpawankumariyengar4227 · 17 days ago
Very good video… Thank you very much ❤
@RadiantNij · 5 months ago
Great work, so easy to understand
@joeyoviedo5202 · 4 months ago
Thank you so much Morlin! Great video
@KamleshSingh-um9jy · 7 months ago
Excellent session... thank you!!
@l_a_h797 · 9 months ago
5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance. But in general, what is optimal is not always very good. For example, a network with just one layer of two nodes is not going to be successful at handwriting recognition, even if its model converges.
@mateusz6190 · 9 months ago
Hi, you seem to have good knowledge of this, so may I ask you a question, please? Do you know if neural networks would be good for recognizing handwritten math expressions (digits, operators, variables, all elements separated so they can be recognized individually)? I need a program that does that, and I tried a neural network; it is good for images from the dataset but terrible for anything from outside the dataset. Would you have any tips? I would be really grateful.
@npomfret · 3 months ago
This would really benefit from a (simple) worked example
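For what it's worth, one possible simple worked example (not from the video; all numbers arbitrary): a 1-input, 1-hidden-unit, 1-output network trained on a single sample, showing the loss fall step by step.

```python
import math

# Tiny worked example: 1 input -> 1 sigmoid hidden unit -> 1 linear output,
# trained on one sample with squared-error loss and gradient descent.
x, y = 2.0, 0.5                      # training sample
w1, b1 = 0.3, 0.1                    # input -> hidden
w2, b2 = -0.4, 0.2                   # hidden -> output
lr = 0.5                             # learning rate

for step in range(5):
    # Forward pass
    z1 = w1 * x + b1
    h = 1.0 / (1.0 + math.exp(-z1))  # sigmoid
    y_hat = w2 * h + b2
    loss = (y_hat - y) ** 2

    # Backward pass (chain rule, layer by layer)
    d_yhat = 2 * (y_hat - y)
    d_w2, d_b2 = d_yhat * h, d_yhat
    d_h = d_yhat * w2
    d_z1 = d_h * h * (1 - h)         # back through the sigmoid
    d_w1, d_b1 = d_z1 * x, d_z1

    # Gradient descent update
    w1, b1 = w1 - lr * d_w1, b1 - lr * d_b1
    w2, b2 = w2 - lr * d_w2, b2 - lr * d_b2
    print(f"step {step}: loss = {loss:.4f}")  # loss shrinks each step
```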
@msatyabhaskarasrinivasacha5874 · 9 months ago
Awesome... awesome, superb explanation, sir
@ashodapakian2788 · 9 months ago
Off topic: what drawing-board setup do these IBM videos use? It's really great.
@boyyang1290 · 9 months ago
I'd like to know, too.
@boyyang1290 · 9 months ago
I found it: he is drawing on the glass.
@EMos48 · 4 months ago
Awesome, thank you Marvin.
@sweealamak628 · 9 months ago
Thanks Mardnin!
@boeng9371 · a year ago
In IBM we trust ✊😔
@stefanfueger3487 · a year ago
Wait... the video has been online for four hours... and still no question about how he manages to write mirrored?
@Aegon1995 · a year ago
There’s a separate video for that
@IBMTechnology · a year ago
Ha, that's so true. Here you go: ibm.biz/write-backwards
@tianhanipah9783 · a year ago
Just flip the video horizontally
@1955subraj · a year ago
Very well explained 🎉
@somethingdifferent1910 · 6 months ago
At 2:20, when he was talking about biases, do they have any relation to hyperparameters or regularization units?
@ramuk- · 4 months ago
thanks Marvin!
@Ellikka1 · 9 months ago
When computing the loss function, how is the "correct" output given? Is it training data that is then compared against another data file with the desired outcomes? In the example of "Martin", how does the neural network get to know that your name was not Mark?
@harrybellingham98 · 5 months ago
It probably would have been good to mention that this is supervised learning, as this would not translate well for a beginner trying to apply it to other forms of NNs.
@rishidubey8745 · 7 months ago
thanks marvin
@rigbyb · a year ago
Great video! 😊
@idobleicher · 10 months ago
A great video!
@sahanseney134 · 7 months ago
cheers Marvin
@neail5466 · a year ago
Thank you for the information. Could you please tell me whether BP is only available and applicable for supervised models, since we have to have a precomputed result to compare against? Certainly, unsupervised models could also use it in theory, but does (or could) it help in a positive way? Additionally, how is the comparison actually performed? Especially for information that can't be quantified!
@jaffarbh · a year ago
Isn't backpropagation used to lower the computation needed to adjust the weights? I understand that computing the adjustments in a "forward" fashion is much more expensive than in a "backward" fashion.
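That's essentially right in spirit: reverse-mode differentiation (backprop) obtains every weight's gradient in one backward sweep, whereas a naive "forward" approach such as finite differences needs roughly one extra forward pass per weight. A rough sketch of the naive alternative, for comparison (illustrative only; all names made up):

```python
# Naive alternative to backprop: estimate each weight's gradient with a
# finite difference, which costs one extra forward pass PER weight.
def finite_difference_grads(loss_fn, weights, eps=1e-6):
    base = loss_fn(weights)
    grads = []
    for i in range(len(weights)):
        bumped = list(weights)
        bumped[i] += eps
        grads.append((loss_fn(bumped) - base) / eps)  # ~ dL/dw_i
    return grads  # N weights -> N+1 forward passes; backprop: one forward + one backward

# Toy loss over three weights:
loss = lambda w: (w[0] * 1.0 + w[1] * 2.0 + w[2] - 0.5) ** 2
print(finite_difference_grads(loss, [0.1, -0.3, 0.2]))
```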
@the1111011 · a year ago
Why didn't you explain how the network updates the weights?
@guliyevshahriyar · a year ago
Thank you!
@mr.wiksith5091 · 5 months ago
thank youu
@mohslimani5716 · a year ago
Thanks, but I still need to understand how it happens technically.
@AnjaliSharma-dv5ke · a year ago
It's done by calculating the derivatives of the ŷ values (the predictions) with respect to the weights, working backward through the network and applying the chain rule of calculus.
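In practice, a framework's automatic differentiation does that bookkeeping for you. A minimal sketch with PyTorch (assuming it is installed; single sigmoid neuron, arbitrary numbers):

```python
import torch

# Autograd performs the backward chain-rule pass automatically.
x = torch.tensor([1.5])
y = torch.tensor([1.0])
w = torch.tensor([0.8], requires_grad=True)
b = torch.tensor([-0.2], requires_grad=True)

y_hat = torch.sigmoid(w * x + b)   # forward pass
loss = (y_hat - y).pow(2).sum()    # squared-error loss
loss.backward()                    # backpropagation fills w.grad and b.grad
print(w.grad, b.grad)              # dL/dw and dL/db via the chain rule
```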
@gren509 · 6 months ago
Save yourself 8 minutes... It's a FEEDBACK loop, FFS!
@Justme-dk7vm · 9 months ago
ANY CHANCE TO GIVE 1000 LIKES ???😩
@vservicesvservices7095 · 6 months ago
Trying to use more unexplained terminology to explain the terminology you're trying to explain is a source of confusion. 😂 Thumbs down.
@tsvigo11_70 · 6 months ago
A neural network cannot be connected by weights; this is nonsense. It can be connected by synapses, that is, by resistances. The way the network learns is incredibly tricky: not only does it have to remember the correct result, which is not easy in itself, but it has to continue to remember the old correct result while remembering a new one. This is what distinguishes a neural network from a fishing net.