What is Back Propagation

  43,047 views

IBM Technology

11 months ago

Learn about watsonx→ ibm.biz/BdyEjK
Neural networks are great for predictive modeling - everything from stock trends to language translation. But when the answer is wrong, how do they "learn" to do better? Martin Keen explains that during a process called backward propagation, the generated output is compared to the expected output, and the error contributed by each neuron (or "node") is examined. By adjusting each node's weights and biases, the error is reduced and overall accuracy improves.
Get started for free on IBM Cloud → ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
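
The loop described above - forward pass, compare with the expected output, push the error back, adjust weights and biases - can be sketched as a toy, single-neuron example in plain Python. The starting weight, bias, target, and learning rate here are invented for illustration; they are not from the video:

```python
import math

# Toy backpropagation: one sigmoid neuron learning to map input 1.0 -> 0.8.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.0          # hypothetical starting weight and bias
x, target = 1.0, 0.8     # one training example
lr = 0.5                 # learning rate

for _ in range(200):
    y = sigmoid(w * x + b)        # forward pass: generated output
    error = y - target            # compare with expected output
    grad = error * y * (1 - y)    # chain rule: dLoss/dz for L = 0.5*error^2
    w -= lr * grad * x            # adjust the weight...
    b -= lr * grad                # ...and the bias to reduce the error

print(round(sigmoid(w * x + b), 2))  # output has moved close to the target
```

Each pass shrinks the error a little; after a few hundred iterations the output sits very near 0.8.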

Пікірлер: 34
@vencibushy
@vencibushy 3 months ago
Backpropagation is to neural networks what negative feedback is to closed-loop systems. The understanding comes pretty much naturally to people who studied automation and control engineering. However, many articles tend to mix things up - in this case, backpropagation and gradient descent. Backpropagation is the process of passing the error back through the layers and using it to recalculate the weights. Gradient descent is the algorithm used for the recalculation; there are other algorithms for recalculating the weights as well.
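
The commenter's distinction can be made concrete: backpropagation produces the gradients, and a separate optimizer consumes them. A minimal sketch of plain gradient descent as that optimizer, with made-up weight and gradient values:

```python
# Backpropagation computes the gradients; the optimizer (here, plain
# gradient descent) decides how to turn them into new weights.
def gradient_descent_step(weights, grads, lr=0.1):
    """One update: move each weight against its gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

weights = [0.4, -0.2]    # hypothetical current weights
grads = [0.5, -1.0]      # gradients produced by backpropagating the error
print(gradient_descent_step(weights, grads))  # approximately [0.35, -0.1]
```

Swapping in momentum, RMSprop, or Adam changes only this update function; the backpropagated gradients feeding it stay the same.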
@Kiera9000
@Kiera9000 9 months ago
thanks for getting me through my exams, because the script from my professor helped literally nothing in understanding deep learning. Cheers mate
@Zethuzzz
@Zethuzzz 1 month ago
Remember the chain rule that you learned in high school? Well, that's what is used in backpropagation
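
The chain rule the commenter means: d/dx f(g(x)) = f'(g(x)) * g'(x), applied repeatedly from the output layer back toward the input. A quick numeric sanity check with invented functions f(u) = u² and g(x) = 3x + 1:

```python
# Chain rule check: for f(u) = u**2 and g(x) = 3*x + 1,
# d/dx f(g(x)) = f'(g(x)) * g'(x) = 2*(3*x + 1) * 3.
def g(x): return 3 * x + 1
def f(u): return u ** 2

x = 2.0
analytic = 2 * g(x) * 3          # 2 * 7 * 3 = 42

# Numerical derivative (central difference) agrees:
h = 1e-6
numeric = (f(g(x + h)) - f(g(x - h))) / (2 * h)
print(analytic, round(numeric, 3))
```

Backpropagation is this same bookkeeping, just with one factor per layer instead of one factor per function.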
@anant1870
@anant1870 10 months ago
Thanks for this Great explanation MARK 😃
@Mary-ml5po
@Mary-ml5po 10 months ago
I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?
@im-Anarchy
@im-Anarchy 9 months ago
What did he even teach, actually?
@hamidapremani6151
@hamidapremani6151 1 month ago
Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!
@hashemkadri3009
@hashemkadri3009 24 days ago
marvin u mean, smh
@sweealamak628
@sweealamak628 1 month ago
Thanks Mardnin!
@sakshammishra9232
@sakshammishra9232 8 months ago
Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊
@neail5466
@neail5466 10 months ago
Thank you for the information. Could you please tell whether backpropagation is only available and applicable for supervised models, since we have to have a precomputed result to compare against? Certainly, unsupervised models could also use this theoretically, but would it help in practice? Additionally, how is the comparison actually performed, especially for information that can't be quantized?
@1955subraj
@1955subraj 7 months ago
Very well explained 🎉
@rigbyb
@rigbyb 10 months ago
Great video! 😊
@guliyevshahriyar
@guliyevshahriyar 10 months ago
Thank you!
@msatyabhaskarasrinivasacha5874
@msatyabhaskarasrinivasacha5874 17 days ago
Awesome... awesome, superb explanation sir
@idobleicher
@idobleicher 1 month ago
A great video!
@pleasethink4789
@pleasethink4789 8 months ago
Hi Marklin! Thank you for such a great explanation. (btw, I know your name is Martin. 😂 )
@ashodapakian2788
@ashodapakian2788 17 days ago
Off topic: what drawing-board setup do these IBM videos use? It's really great.
@boyyang1290
@boyyang1290 12 days ago
I'd like to know, too.
@boyyang1290
@boyyang1290 12 days ago
I found it: he is drawing on glass
@the1111011
@the1111011 9 months ago
Why didn't you explain how the network updates the weights?
@Ellikka1
@Ellikka1 1 month ago
When doing the loss function, how is the "correct" output given? Is it training data compared against another data file with the desired outcomes? In the example of "Martin", how does the neural network get to know that your name was not Mark?
@jaffarbh
@jaffarbh 10 months ago
Isn't backpropagation used to lower the computation needed to adjust the weights? I understand that doing so in a "forward" fashion is much more expensive than in a "backward" fashion.
@stefanfueger3487
@stefanfueger3487 10 months ago
Wait ... the video has been online for four hours ... and still no question about how he manages to write mirrored?
@Aegon1995
@Aegon1995 10 months ago
There’s a separate video for that
@itdataandprocessanalysis3202
@itdataandprocessanalysis3202 10 months ago
🤦‍♂
@IBMTechnology
@IBMTechnology 10 months ago
Ha, that's so true. Here you go: ibm.biz/write-backwards
@tianhanipah9783
@tianhanipah9783 3 months ago
Just flip the video horizontally
@l_a_h797
@l_a_h797 15 days ago
5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope that the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance. But in general, what is optimal is not always very good. For example, a network with just 1 layer of 2 nodes is not going to be successful at handwriting recognition, even if its model converges.
@mateusz6190
@mateusz6190 15 days ago
Hi, you seem to have good knowledge of this, so may I ask you a question? Do you know if neural networks are good for recognizing handwritten math expressions (digits, operators, variables, with all elements separated so they can be recognized individually)? I need a program that does that, and I tried a neural network; it is good on images from the dataset but terrible on anything outside the dataset. Would you have any tips? I would be really grateful
@boeng9371
@boeng9371 3 months ago
In IBM we trust ✊😔
@mohslimani5716
@mohslimani5716 10 months ago
Thanks, but I still need to understand how it technically happens
@AnjaliSharma-dv5ke
@AnjaliSharma-dv5ke 10 months ago
It's done by calculating the derivatives of the y-hats with respect to the weights, working backwards through the network and applying the chain rule of calculus
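
That reply can be worked by hand for a hypothetical two-weight network (both weights and the input invented for illustration): compute the output-layer error term first, then carry it back one layer with the chain rule:

```python
import math

# Two-layer illustration: y_hat = sigmoid(w2 * sigmoid(w1 * x)),
# loss L = 0.5 * (y_hat - t)**2. Gradients are built from the output
# layer inward, reusing the output error term for the earlier layer.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, t = 1.0, 0.0          # input and target (made-up values)
w1, w2 = 0.6, -0.4       # made-up weights

h = sigmoid(w1 * x)      # hidden activation (forward pass)
y_hat = sigmoid(w2 * h)  # network output

delta_out = (y_hat - t) * y_hat * (1 - y_hat)  # dL/dz at the output
grad_w2 = delta_out * h                        # dL/dw2
delta_hidden = delta_out * w2 * h * (1 - h)    # error carried back a layer
grad_w1 = delta_hidden * x                     # dL/dw1

print(grad_w2, grad_w1)
```

Note that `delta_out` is computed once and reused for `grad_w1`; that reuse, layer by layer, is what makes the backward pass cheap.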
@Justme-dk7vm
@Justme-dk7vm 1 month ago
ANY CHANCE TO GIVE 1000 LIKES ???😩