What is Back Propagation

84,744 views

IBM Technology

1 day ago

Comments: 54
@vencibushy • 1 year ago
Back propagation is to neural networks what negative feedback is to closed-loop systems. The understanding comes pretty much naturally to people who have studied automation and control engineering. However, many articles tend to mix things up, in this case back propagation and gradient descent. Back propagation is the process of passing the error back through the layers and using it to recalculate the weights. Gradient descent is the algorithm used for the recalculation. There are other algorithms for recalculating the weights.
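To make that distinction concrete, here is a minimal sketch in Python (not from the video; it assumes a single linear neuron, squared-error loss, and a plain gradient-descent update, with optimizers like Adam as drop-in replacements):

```python
# Backpropagation vs. gradient descent, separated explicitly.
# Assumed model: a single linear neuron y_hat = w*x + b with squared error.

def forward(w, b, x):
    return w * x + b                    # the prediction y_hat

def backward(w, b, x, y):
    """Backpropagation: pass the error back to get the gradients."""
    error = forward(w, b, x) - y
    dloss_dw = 2 * error * x            # chain rule: d(y_hat)/dw = x
    dloss_db = 2 * error                # chain rule: d(y_hat)/db = 1
    return dloss_dw, dloss_db

def gradient_descent_step(w, b, grads, lr=0.1):
    """One possible update rule; Adam, RMSProp, etc. would replace this."""
    dw, db = grads
    return w - lr * dw, b - lr * db

w, b = 0.0, 0.0
for _ in range(20):
    grads = backward(w, b, x=2.0, y=10.0)      # backpropagation
    w, b = gradient_descent_step(w, b, grads)  # gradient descent
print(w, b)  # settles at values with 2*w + b == 10
```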
@Kiera9000 • 1 year ago
Thanks for getting me through my exams, because the script from my professor helps with literally nothing in understanding deep learning. Cheers, mate
@saisrikaranpulluri1472 • 3 days ago
Incredible, the Martin example made me understand the concept exactly. Your real-life examples are great as well as entertaining.
@hamidapremani6151 • 9 months ago
Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!
@hashemkadri3009 • 9 months ago
marvin u mean, smh
@BrianMarcWhittaker • 2 months ago
Thank you for explaining this. I'm reading “Architects of Intelligence” and that's the first time I’ve heard the term backpropagation. Your examples and drawings help me better understand the topic.
@Zethuzzz • 10 months ago
Remember the chain rule that you learned in high school? Well, that's what is used in backpropagation.
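For anyone who wants the refresher, a tiny numeric check of that chain rule (hypothetical functions, nothing from the video):

```python
import math

# If L = f(g(w)) with g(w) = 3w and f(u) = sin(u),
# the chain rule gives dL/dw = f'(g(w)) * g'(w) = cos(3w) * 3.
w = 0.5
analytic = math.cos(3 * w) * 3

# Numerical derivative as a sanity check.
h = 1e-6
numeric = (math.sin(3 * (w + h)) - math.sin(3 * w)) / h

print(analytic, numeric)  # agree to about 6 decimal places
```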
@Mary-ml5po • 1 year ago
I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?
@im-Anarchy • 1 year ago
What did he even teach, actually?
@anant1870 • 1 year ago
Thanks for this Great explanation MARK 😃
@ca1790 • 7 months ago
The gradient is passed backward using the chain rule from calculus. The gradient is just a multivariable form of the derivative. It is an actual numerical quantity for each "atomic" part of the network; usually a neuron's weights and bias.
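As an illustration of "an actual numerical quantity" per neuron, here is a sketch of those per-parameter gradients for one sigmoid neuron (assumed toy numbers and squared-error loss):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 2.0])   # inputs to the neuron
w = np.array([0.1, 0.4, -0.3])   # the neuron's weights
b, y = 0.2, 1.0                  # bias and target output

y_hat = sigmoid(w @ x + b)

# Chain rule, factor by factor:
dL_dyhat = 2 * (y_hat - y)       # loss w.r.t. prediction
dyhat_dz = y_hat * (1 - y_hat)   # sigmoid derivative
grad_w = dL_dyhat * dyhat_dz * x     # one number per weight
grad_b = dL_dyhat * dyhat_dz * 1.0   # one number for the bias

print(grad_w, grad_b)
```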
@mercyl2355 • 2 months ago
Thanks Marlon.
@Adnanuni • 3 months ago
Thank you Mariin😃
@pleasethink4789 • 1 year ago
Hi Marklin! Thank you for such a great explanation. (btw, I know your name is Martin. 😂 )
@sakshammishra9232 • 1 year ago
Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊
@brpawankumariyengar4227 • 17 days ago
Very good video. Thank you very much ❤
@RadiantNij • 5 months ago
Great work, so easy to understand
@joeyoviedo5202 • 4 months ago
Thank you so much Morlin! Great video
@KamleshSingh-um9jy • 7 months ago
Excellent session, thank you!!
@l_a_h797 • 9 months ago
5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance. But in general, what is optimal is not always very good. For example, a network with just one layer of two nodes is not going to be successful at handwriting recognition, even if its model converges.
@mateusz6190 • 9 months ago
Hi, you seem to have good knowledge of this, so may I ask you a question? Do you know if neural networks would be good at recognizing handwritten math expressions (digits, operators, variables, all elements separated so they can be recognized individually)? I need a program that does that, and I tried a neural network; it is good for images from the dataset but terrible for anything outside the dataset. Would you have any tips? I would be really grateful.
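On the convergence point two comments up: a sketch of what "converged" typically checks in practice, namely a plateau in the loss, which says nothing about whether the plateau is any good (hypothetical numbers and a simple patience rule):

```python
def has_converged(losses, patience=5, tol=1e-3):
    """True once the last `patience` losses move by less than `tol`."""
    if len(losses) < patience:
        return False
    recent = losses[-patience:]
    return max(recent) - min(recent) < tol

# A loss curve that has flattened out around 0.40:
losses = [0.90, 0.55, 0.42, 0.41, 0.4042, 0.4041, 0.4041, 0.4040, 0.4040]
print(has_converged(losses))  # True, yet 0.40 may still be a weak model
```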
@npomfret • 3 months ago
This would really benefit from a (simple) worked example
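Agreed; a minimal worked example might look like this (hypothetical numbers: a one-hidden-neuron network with ReLU, squared error, and three training steps):

```python
# Tiny network y_hat = w2 * relu(w1 * x), trained on one point (x=1, y=4).

def relu(z):
    return max(z, 0.0)

w1, w2, lr = 0.5, 0.5, 0.1
x, y = 1.0, 4.0

for step in range(3):
    # Forward pass
    h = relu(w1 * x)          # hidden activation
    y_hat = w2 * h            # prediction
    loss = (y_hat - y) ** 2

    # Backward pass (chain rule, output layer first)
    d_yhat = 2 * (y_hat - y)
    d_w2 = d_yhat * h                              # dL/dw2
    d_h = d_yhat * w2                              # error passed back
    d_w1 = d_h * (1.0 if w1 * x > 0 else 0.0) * x  # through the ReLU

    # Gradient descent update
    w1 -= lr * d_w1
    w2 -= lr * d_w2
    print(f"step {step}: loss={loss:.3f}, w1={w1:.3f}, w2={w2:.3f}")
```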
@msatyabhaskarasrinivasacha5874 • 9 months ago
Awesome... awesome, superb explanation, sir.
@ashodapakian2788 • 9 months ago
Off topic: what drawing-board setup do these IBM videos use? It's really great.
@boyyang1290 • 9 months ago
I'd like to know, too.
@boyyang1290 • 9 months ago
I found it: he is drawing on glass.
@EMos48 • 4 months ago
Awesome thank you Marvin.
@sweealamak628 • 9 months ago
Thanks Mardnin!
@boeng9371 • 1 year ago
In IBM we trust ✊😔
@stefanfueger3487 • 1 year ago
Wait... the video has been online for four hours... and still no one asking how he manages to write mirrored?
@Aegon1995 • 1 year ago
There’s a separate video for that
@IBMTechnology • 1 year ago
Ha, that's so true. Here you go: ibm.biz/write-backwards
@tianhanipah9783 • 1 year ago
Just flip the video horizontally
@1955subraj • 1 year ago
Very well explained 🎉
@somethingdifferent1910 • 6 months ago
At 2:20, when he was talking about biases, do they have any relation to hyperparameters or regularization?
@ramuk- • 4 months ago
thanks Marvin!
@Ellikka1 • 9 months ago
When computing the loss function, how is the "correct" output given? Is it training data that is then compared against another data file with the desired outcomes? In the "Martin" example, how does the neural network get to know that your name was not Mark?
@harrybellingham98 • 5 months ago
It probably would have been good to mention that this is supervised learning, as it would not translate well for a beginner trying to apply this to other forms of NNs.
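To sketch an answer to both comments above: yes, this is supervised learning, and the "correct" output comes from labels paired with the training data. A toy version of the name example (hypothetical three-name vocabulary, cross-entropy loss):

```python
import math

classes = ["Mark", "Martin", "Marvin"]
target = [0.0, 1.0, 0.0]          # label says the right answer is "Martin"
predicted = [0.5, 0.3, 0.2]       # the network's softmax output

# Cross-entropy loss penalizes low probability on the labeled class.
loss = -sum(t * math.log(p) for t, p in zip(target, predicted))
print(loss)  # -log(0.3) ≈ 1.20; a perfect prediction would give 0
```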
@rishidubey8745 • 7 months ago
thanks marvin
@rigbyb • 1 year ago
Great video! 😊
@idobleicher • 10 months ago
A great video!
@sahanseney134 • 7 months ago
cheers Marvin
@neail5466 • 1 year ago
Thank you for the information. Could you please tell us whether BP is only applicable to supervised models, since we have to have a precomputed result to compare against? Certainly, unsupervised models could also use it in theory, but would it have a positive effect? Additionally, how is the comparison actually performed, especially for information that can't be quantized?
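One partial answer: backprop itself only needs some differentiable loss, not necessarily a hand-made label. An autoencoder (an unsupervised model) trains with backprop by using the input itself as the target. A rough sketch of that idea (random toy weights, forward pass and loss only):

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0, 1.0])     # input sample
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((2, 4)) * 0.1   # encoder: 4 -> 2
W_dec = rng.standard_normal((4, 2)) * 0.1   # decoder: 2 -> 4

code = W_enc @ x                  # compressed representation
x_hat = W_dec @ code              # reconstruction
loss = np.mean((x_hat - x) ** 2)  # the "correct output" is just x itself
print(loss)
```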
@jaffarbh • 1 year ago
Isn't backpropagation used to lower the computation needed to adjust the weights? I understand that computing the gradients in a "forward" fashion is much more expensive than in a "backward" fashion.
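Roughly, yes: a "forward" approach such as finite differences costs one extra pass per weight, while a single backward pass yields every gradient at once. A sketch with an assumed linear model, where the backprop gradient has a simple closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((50, 50))   # 2,500 weights
x = rng.standard_normal(50)
y = rng.standard_normal(50)

def loss(W):
    return 0.5 * np.sum((W @ x - y) ** 2)

# "Forward" way: finite differences, one extra forward pass PER weight.
h = 1e-6
grad_fd = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy()
        Wp[i, j] += h
        grad_fd[i, j] = (loss(Wp) - loss(W)) / h   # 2,500 forward passes

# "Backward" way: one pass gives all 2,500 gradients at once.
grad_bp = np.outer(W @ x - y, x)   # dL/dW via the chain rule

print(np.max(np.abs(grad_fd - grad_bp)))  # small; the two agree closely
```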
@the1111011 • 1 year ago
Why didn't you explain how the network updates the weights?
@guliyevshahriyar • 1 year ago
Thank you!
@mr.wiksith5091 • 5 months ago
thank youu
@mohslimani5716 • 1 year ago
Thanks, but I still need to understand how it technically happens.
@AnjaliSharma-dv5ke • 1 year ago
It's done by calculating the derivatives of the ŷ values with respect to the weights, working backwards through the network and applying the chain rule of calculus.
@gren509 • 6 months ago
Save yourself 8 minutes... it's a FEEDBACK loop, FFS!
@Justme-dk7vm • 9 months ago
ANY CHANCE TO GIVE 1000 LIKES ???😩
@vservicesvservices7095 • 6 months ago
Trying to use more unexplained terminology to explain the terminology you're trying to explain is a source of confusion. 😂 Thumbs down.
@tsvigo11_70 • 6 months ago
A neural network cannot be connected by weights; that is nonsense. It can be connected by synapses, that is, by resistances. The way the network learns is incredibly tricky: not only does it have to remember the correct result, which is not easy in itself, but it has to continue to remember the correct result while remembering a new correct result. This is what distinguishes a neural network from a fishing net.