#1 Solved Example Back Propagation Algorithm Multi-Layer Perceptron Network by Dr. Mahesh Huddar

782,337 views

Mahesh Huddar

1 day ago

Comments: 226
@ervincsengeri1840 · 1 year ago
Thank you!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@akinyaman · 1 year ago
Man, I looked at many backprop explanations, and most of them are full of rocket-science stuff. This is the easiest and clearest one ever. Thanks for sharing!
@safayetsuzan1951 · 2 years ago
Sir, you deserve a big thanks. My teacher gave me an assignment, and I was searching YouTube for two days for the weight calculation. Finally, your video did the job. It was really satisfying. Thank you, sir.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@joeystenbeck6697 · 2 years ago
This makes the math very clear. I now know the math and have some intuition, so I hope to fully connect the two soon. Thanks for the great video!
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@shivamanand8019 · 1 year ago
It looks like there is an error in the ∆wji notation; you followed just the opposite convention. @@MaheshHuddar
@jarveyjaguar4395 · 2 years ago
I have an exam in 2 days, and your videos just saved me from failing this module. Thank you so much, and much love from 🇩🇿🇩🇿.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@tejasvinnarayan2887 · 2 years ago
Very clear! What about the bias b? What is the formula if we add a bias?
@horiashahzadi302 · 4 months ago
A really excellent lecture. Thank you very much. Keep it up!
@abu-yousuf · 2 years ago
Great work, Dr. Mahesh. Thanks from Pakistan.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@alperari9496 · 10 months ago
I believe that for hidden units, the w_kj in the delta(j) formula should have been w_jk, i.e. the other way around.
@alperari9496 · 10 months ago
And delta(w_ji) should have been delta(w_ij), again the other way around.
@steveg906 · 9 months ago
Yeah, I was about to comment the same thing.
@SumOfBody · 8 days ago
And the a_j calculations conflict with the formula posted. Either it is "for j in w_i: a_j = sum(w_ij * x_i)" (what the equation says) or "for i in w: a_j = sum(w_ij * x_i)" (what the calculations do). Pretty sure the formula is correct but the calculations are wrong.
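For readers puzzled by the index order being debated above, here is a minimal sketch of the forward pass under one concrete convention: `w[i][j]` is the weight from input unit `i` to hidden unit `j`. The specific inputs and weights are illustrative assumptions, not taken from the video.

```python
import math

def sigmoid(z):
    # logistic activation, as used in the video
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical 2-input, 2-hidden-unit layer;
# w[i][j] = weight from input unit i to hidden unit j (an assumed convention)
x = [0.35, 0.9]
w = [[0.1, 0.4],
     [0.8, 0.6]]

# a_j = sum over inputs i of w[i][j] * x[i]  (no bias term, as in the video)
a = [sum(w[i][j] * x[i] for i in range(len(x))) for j in range(len(w[0]))]
h = [sigmoid(a_j) for a_j in a]  # hidden-unit outputs
```

Writing the loop out this way makes the subscript order unambiguous: whichever letter ranges over the inputs is the one being summed.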
@R41Ryan · 1 year ago
Thank you. I've been trying to implement a reinforcement-learning algorithm from scratch. I understood everything except back propagation, and every video I watched was vague until I saw this one. Good stuff!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@BloodX03 · 1 year ago
Damn, sir, you are the best YouTube teacher in AI. Love you, sir!
@shreyaankrao968 · 1 year ago
I got a crystal-clear understanding of this concept only because of you, sir. The flow of the video is excellent. I appreciate your efforts! Thank you, and keep up the good work!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@khizarkhan2250 · 1 year ago
Guys! The threshold or bias is a tuning parameter. You can select something low like 0.01 or 0.02, or high like 0.2, and check whether the error gets lower or not. I hope this helps.
@ToriliaShine · 10 months ago
Thank you! Understood the concept smoothly with your video!
@MaheshHuddar · 10 months ago
Thank you. Do like, share, and subscribe.
@ToriliaShine · 10 months ago
@@MaheshHuddar Did that. Question though: for the forward pass, what about biases? They are values with their own weights, right? Were they just not included in this example?
@ToriliaShine · 10 months ago
Ah, never mind, got it from your next video in this series, lol.
@paul-pp1op · 2 years ago
Best video explanation of ANN back propagation. Many thanks, sir.
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@brandoncazares8452 · 1 month ago
Thank you very much, sir. This helped me understand better.
@kronten3662 · 1 month ago
Thank you, sir, for helping me get the full 10 marks for this question on the exam paper.
@satriohalim7213 · 28 days ago
Thank you so much, sir, for this informative explanation. Lots of respect from me.
@alibabarahaei2229 · 1 day ago
Perfect ❤
@vnbalaji7225 · 2 years ago
A simple, lucid example, well illustrated. Please continue.
@MaheshHuddar · 2 years ago
Thank you.
@web3sorcerer · 4 months ago
This is a great lecture!
@ushanandhini1942 · 2 years ago
Easy to understand. Please make the next one on CNN.
@NotLonely_Traveler · 9 months ago
Finally, one that makes sense.
@MaheshHuddar · 9 months ago
Thank you. Do like, share, and subscribe.
@Ayesha_01257 · 11 months ago
Very well explained. Keep up the good work!
@MaheshHuddar · 11 months ago
Thank you. Do like, share, and subscribe.
@hendsaleh7428 · 1 month ago
Thank you so much 💓 You are the best 👌
@sharanyas1565 · 2 years ago
Very clear. Thanks for uploading this video.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@mdnahidulislam13 · 8 months ago
Clear explanation. Recommended.
@MaheshHuddar · 8 months ago
Thank you. Do like, share, and subscribe.
@sahmad120967 · 1 year ago
Great, sir. It is a very clear example of how to calculate an ANN. Thanks, keep being productive.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@25fpslagger81 · 8 months ago
Thank you, sir. All of your machine learning videos have helped us students a lot.
@MaheshHuddar · 8 months ago
Welcome. Do like, share, and subscribe.
@hemanthd623 · 8 months ago
@@MaheshHuddar Sir, can you solve problems on HMM and CNN, please?
@msatyabhaskarasrinivasacha5874 · 8 months ago
It's an awesome explanation, sir. No words to thank you, sir.
@MaheshHuddar · 8 months ago
You are most welcome. Do like, share, and subscribe.
@lalladiva6097 · 1 year ago
You are a life saver. Thank you so much!
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@df_iulia_estera · 1 year ago
Awesome explanation! Thanks a lot!
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@alizainsaleem5533 · 5 months ago
Best explanation.
@pritam-kunduu · 1 year ago
You taught very well. Today is my exam, and your videos were really helpful. I hope I pass well without getting a backlog in this subject. 👍👍
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@AbhishekSingh-up4rv · 2 years ago
Awesome explanation. Thanks!
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@srisangeeth4131 · 8 months ago
The concept is clear. I gained confidence in this concept, sir. Thank you 👍👍👍👍
@MaheshHuddar · 8 months ago
Welcome. Do like, share, and subscribe.
@srisangeeth4131 · 8 months ago
@@MaheshHuddar Sir, can you make videos on Gaussian processes in machine learning?
@TigistAyeleHabte · 2 months ago
Really helpful, thank you.
@sahanazbegam6913 · 7 months ago
Very clear. Thank you for the content.
@MaheshHuddar · 7 months ago
Welcome. Do like, share, and subscribe.
@arminmow · 2 years ago
You saved me. You're a hero. Thank you!
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@iwantpeace6535 · 10 months ago
Thank you, sir. A brilliant Indian mind.
@MaheshHuddar · 10 months ago
Welcome. Do like, share, and subscribe.
@MissPiggyM976 · 1 year ago
Very useful, thanks!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@a5a5aa37 · 1 year ago
Thanks a lot for your explanation!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@user-fl7bm8jc8o · 1 year ago
Thanks a lot, sir 🙏🙏🙏🙏🙏🙏🙏
@MaheshHuddar · 1 year ago
Most welcome. Do like, share, and subscribe.
@femiwilliam1830 · 1 year ago
Good job, but why is the bias term not accounted for before applying the sigmoid function?
@kanikavinayak7586 · 3 months ago
Love from TIET 👏👏
@tanvirahmed552 · 2 years ago
Nice video. I easily understood the topic. Thank you.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@flakysob · 1 year ago
Thank you so much! You saved me. I subscribed. Thanks!
@MaheshHuddar · 1 year ago
Welcome. Please do like and share.
@tatendamachikiche5535 · 2 years ago
Good stuff. Thank you.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@romankyrkalo9633 · 2 years ago
Great video, easy to understand.
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@ajayofficial3706 · 1 year ago
Thank you, sir, for explaining each and every point.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@adilmughal2251 · 1 year ago
Amazing stuff, just to the point and clear.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@AnubhavApurva · 2 years ago
Thank you!
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@sathviksrikanth7362 · 1 year ago
Thanks a lot, sir!!!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@bagusk_awan · 1 year ago
Sir, thank you for this great video! It was really helpful. I appreciate the clear explanation.
@MaheshHuddar · 1 year ago
Glad it was helpful! Do like, share, and subscribe.
@mr.commonsense6645 · 1 year ago
GOATed explaining. Awesome, awesome!
@usmanyousaaf · 9 months ago
Today is my exam again. Well explained, sir.
@iqramunir1468 · 1 year ago
Thank you so much, sir.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@utkarshmangal6559 · 2 years ago
You are a king, sir. Thank you for saving me from my exam tomorrow.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe. All the very best for your exams!
@jacki8726 · 1 year ago
Very helpful.
@MaheshHuddar · 1 year ago
Do like, share, and subscribe.
@usmanyousaaf · 2 years ago
Last night!! Today is the exam. Well explained, boss.
@TXS-xt6vj · 1 year ago
You are a legend.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@umakrishnamarineni3520 · 2 years ago
Thank you, sir.
@MaheshHuddar · 2 years ago
Most welcome. Do like, share, and subscribe.
@shubhampamecha9650 · 2 years ago
And where is the bias b? There should also be some constant, shouldn't there?
@allahthemostmerciful2706 · 1 year ago
Soooooo good ❤
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@tlbtlb3950 · 1 year ago
Not bad, my Indian friend!
@arj1045 · 1 year ago
Well done, sir.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@jinkagamura2820 · 2 years ago
I have a doubt. In many places I have seen the error calculated using the formula E = 1/2 (y - y*)^2, but you calculated it using plain subtraction. Which is correct?
@keertichauhan6221 · 2 years ago
Same doubt. What is the correct method?
@priyanshumohanty5261 · 2 years ago
@@keertichauhan6221 I think there are different methods to compute the error. The one mentioned above is the squared error. The one shown in the video is also valid, but using MSE or RMSE is generally regarded as a better measure.
@chukwuemekaomerenna4396 · 2 years ago
For multiple data points you use the error function, and for a single output you use the loss function. The loss is error = actual - target; the error function is 1/2 (actual - target)^2.
@chukwuemekaomerenna4396 · 2 years ago
For multiple data points you use the error function: error = 1/2 * sum((y_actual - y_target)^2). For a single data point you use the loss function: loss = y_actual - y_target.
@sharanyas1565 · 2 years ago
The errors are made positive by squaring.
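The two quantities discussed in this thread are easy to see side by side in code. The numbers below (target 0.5, output 0.69) are illustrative assumptions, not values from the video; the point is that the squared error is what gradient descent minimizes, while the signed difference (t - o) is the factor that shows up inside the delta rule.

```python
# Illustrative values only (assumed, not from the video)
target, output = 0.5, 0.69

diff_error = target - output                    # signed difference used in the delta rule
squared_error = 0.5 * (target - output) ** 2    # E = 1/2 (t - o)^2, minimized by gradient descent

# For a sigmoid output unit, differentiating E with respect to the net input
# gives delta_o = (t - o) * o * (1 - o), so both views share the (t - o) factor.
delta_o = (target - output) * output * (1 - output)
```

So the video's subtraction and the textbook's squared error are not competing formulas: differentiating the squared error is exactly what produces the (t - o) term used in the weight updates.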
@bharatreddy972 · 2 years ago
Thank you so much, sir. These videos gave us a clear understanding of all the machine learning concepts.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@AmnaCode · 8 months ago
Thanks for the solution.
@MaheshHuddar · 8 months ago
Welcome. Do like, share, and subscribe.
@AmnaCode · 8 months ago
@@MaheshHuddar Sure. Thanks 😊
@madhusaggi · 1 year ago
Can you please make videos on CNN with the mathematical concepts? Your videos are very useful and understandable. Thank you.
@jeyak9719 · 1 month ago
Did you skip the bias while teaching?
@bhuvanareedy · 1 month ago
Is this suitable for neural networks?
@mehmetakifvardar · 1 year ago
Mr. Huddar, thanks a lot for the perfect explanation. One thing though: how do I calculate the change in the bias term for each neuron in my neural network?
@SaiNath-cw7yn · 7 months ago
Thank you, sir.
@MaheshHuddar · 7 months ago
Welcome. Do like, share, and subscribe.
@dydufjfbfnddhbxnddj83 · 2 years ago
How did you update the weights of the connections between the input layer and the hidden layer?
@tuccecintuglu404 · 1 year ago
YOU ARE THE BEST
@SHUBHAMKUMAR-cd7fs · 8 months ago
Awesome.
@MaheshHuddar · 8 months ago
Thanks! Do like, share, and subscribe.
@believer-n3t · 18 days ago
Sir, when building a neuron, what should our initial weights be? How do we decide that?
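On the initialization question above: weights are typically started at small random values so that different hidden units do not compute identical updates. The uniform range below is one common heuristic chosen for illustration, not a value from the video.

```python
import random

def init_weights(n_in, n_out, seed=0):
    """Return an n_in x n_out weight matrix of small random values.

    Small random values break the symmetry between hidden units; the
    uniform(-0.5, 0.5) range here is an assumed, commonly used heuristic.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [[rng.uniform(-0.5, 0.5) for _ in range(n_out)]
            for _ in range(n_in)]

w = init_weights(2, 2)
```

Starting all weights at the same constant (e.g. zero) would make every hidden unit receive the same gradient and stay identical forever, which is why the randomness matters more than the exact range.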
@ram-pc4wk · 1 year ago
How are you deriving the delta_j formula? You could include the derivation of the sigmoid function.
@pranavgangapurkar195 · 1 year ago
In one epoch, how many times does back propagation take place?
@makkingeverything6610 · 2 years ago
Thank you, man.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@imranimmu4714 · 2 years ago
Thank you.
@MaheshHuddar · 2 years ago
Welcome. Do like, share, and subscribe.
@priyaprabhu7101 · 2 years ago
Nice video, sir. Where is the bias here?
@지엔서 · 1 year ago
Good video.
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@ayushhmalikk · 2 years ago
You're a legend.
@MaheshHuddar · 2 years ago
Thank you. Do like, share, and subscribe.
@LongZzz · 1 year ago
Thank you so much. This video came right when I was feeling bad at machine learning and wanted to give up, but now I think it is not as hard as I thought.
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@Gaurav-z6l1c · 3 months ago
I just wanted to know whether these weight-update equations are the same for all neural networks or specific to this one.
@oposicionine4074 · 1 year ago
How do you update the weights if you have more input data? In this case there is only one training example. How do you do it with two examples? Do you do the same thing twice?
@animeclub8475 · 11 months ago
Everyone says "thank you", but only a few understand that this video alone is not enough when there is more than one neuron per layer. Those who say "thank you" probably don't plan to build a neural network.
@ishrarchowdhury4850 · 3 months ago
Check the algorithm: you take one input, update the weights, then take the next one, and continue through all the data points. You repeat this for the given number of epochs.
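The per-example procedure described in the reply above can be sketched as a loop over examples inside a loop over epochs. The data, initial weights, and learning rate below are illustrative assumptions, not values from the video, and the network is reduced to a single sigmoid neuron to keep the update visible.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative training set: (inputs, target) pairs (assumed values)
data = [([0.35, 0.9], 0.5), ([0.1, 0.7], 0.8)]
w = [0.3, -0.2]   # assumed initial weights
lr = 0.25         # assumed learning rate

epoch_errors = []
for epoch in range(100):
    total = 0.0
    for x, target in data:                         # one update per example
        o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        delta = (target - o) * o * (1 - o)         # output-unit delta rule
        w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
        total += 0.5 * (target - o) ** 2
    epoch_errors.append(total)                     # error should shrink over epochs
```

This is the stochastic (online) variant: the weights move after every example, so the second example already sees the update made for the first.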
@AmarAmar-q8s · 27 days ago
Hey man, how can I reach out to you?
@sowmiya_rocker · 1 year ago
Thanks for the video, sir. I have a doubt: how did you update the weights without gradient descent (GD) or any other optimization technique? I read in blogs that networks don't get trained with backpropagation alone, without GD. In other words, how would the calculation change if we also implemented GD here? I'm a rookie; kindly guide me, sir.
@adityachalla7677 · 1 year ago
Gradient descent has been used in this video while updating the weights; the change in weights is computed through gradient descent. He just has not written out the derivative math.
@rohanwarghade7111 · 1 year ago
How did he get y_target = 0.5? @@adityachalla7677
@lakshsinghania · 1 year ago
Sir, each perceptron also has a bias with it, right?
@MaheshHuddar · 1 year ago
Yes.
@MaheshHuddar · 1 year ago
Follow this video for bias: kzbin.info/www/bejne/pGOvYn1rf76ai80
@lakshsinghania · 1 year ago
Thank you so much, sir! @@MaheshHuddar
@pubuduchanna1736 · 1 year ago
Thank you! This helped me a lot!
@MaheshHuddar · 1 year ago
Welcome. Do like, share, and subscribe.
@justenjoy3744 · 1 year ago
Which book did you take this problem from?
@amulyadevireddy5669 · 2 years ago
What about the bias factor?
@halihammer · 2 years ago
He did not add it, to keep things simple, I guess. But you can add the bias by making it an extra input. Then the method doesn't change; you just have a new (constant) input for each layer, normally set to 1, and the weight associated with it acts as your bias, since 1 * biasWeight = biasWeight. I just appended 1 to my input vector and generated an additional weight in the weight matrix. But I'm also just learning and not sure if I'm 100% correct.
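The bias-as-extra-input trick described in the comment above is a standard one, and a few lines of code make it concrete. The input values and weights here are made-up assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Made-up example values
x = [0.35, 0.9]      # real inputs
w = [0.1, 0.8]       # their weights
bias_weight = -0.3   # the bias, treated as just another weight

x_ext = x + [1.0]            # append a constant "bias input" of 1
w_ext = w + [bias_weight]    # its weight plays the role of the bias

net = sum(wi * xi for wi, xi in zip(w_ext, x_ext))
out = sigmoid(net)
```

Because the extra input is always 1, backpropagation updates the bias weight with exactly the same delta rule as every other weight, which is why the method "doesn't change".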
@JudyXu-d6j · 1 year ago
I have a question: what if we have a bias term and bias weights? Do we need to account for those, or would they be 0?
@MaheshHuddar · 1 year ago
Yes, you have to consider them. Follow this video: kzbin.info/www/bejne/pGOvYn1rf76ai80
@satwik4823 · 1 year ago
GODDD!!!!!!!!!!!!!!!!!!!
@MaheshHuddar · 1 year ago
Thank you. Do like, share, and subscribe.
@pritampatil4669 · 2 years ago
What about the bias terms?
@seabiscuitthechallenger6899 · 6 months ago
👍👍👍👍👍👍👍👍
@MaheshHuddar · 6 months ago
Thank you. Do like, share, and subscribe.
@kaavyashree6209 · 1 year ago
Sir, how do we update the bias in back propagation?
@MaheshHuddar · 1 year ago
Refer to this video: kzbin.info/www/bejne/pGOvYn1rf76ai80
@Professor_el · 9 months ago
The formulas for delta_w only work because of the nature of the activation function, right? If it is a hyperbolic tangent or ReLU, the formulas change, right?
@steveg906 · 9 months ago
Yes.
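To expand on that exchange: the o * (1 - o) factor in the video's delta formulas is specifically the derivative of the sigmoid, and swapping the activation swaps that factor. A minimal sketch of the three derivatives:

```python
import math

def sigmoid_deriv(o):
    # sigmoid'(net), written in terms of the unit's output o = sigmoid(net)
    return o * (1 - o)

def tanh_deriv(o):
    # tanh'(net), written in terms of the output o = tanh(net)
    return 1 - o ** 2

def relu_deriv(net):
    # ReLU'(net), written in terms of the net input itself
    return 1.0 if net > 0 else 0.0

# Example: evaluate each derivative at net = 0.5
net = 0.5
d_sig = sigmoid_deriv(sigmoid := 1 / (1 + math.exp(-net)))
d_tanh = tanh_deriv(math.tanh(net))
d_relu = relu_deriv(net)
```

The rest of the backpropagation machinery (the chain rule over layers) stays the same; only this local derivative term changes with the activation.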
@muhtasirimran · 1 year ago
I have a confusion. We usually use ReLU in the hidden layer, not sigmoid. Shouldn't we calculate the hidden layer's activations using ReLU instead of sigmoid?
@MaheshHuddar · 1 year ago
Yes, you have to do the calculation based on the chosen activation function.
@muhtasirimran · 1 year ago
@@MaheshHuddar I know. You have used the sigmoid function in the hidden layer; with ReLU this would give a different result.