Excellent explanation, my friend. I loved it. I am a 61-year-old CSE professor. May God bless you. Please provide more videos like this so that many in the student community can learn. May Goddess Saraswathi bless you with a bright future.
@ThinkXAcademy · 3 years ago
Thank you so much for such kind words. I will keep creating more videos to help the student community ✅😄
@ninasirsi2340 · 1 year ago
Sir, at the age of 61, what will you do by learning perceptrons? Are you going to teach them in your college?
@WxK_Riku · 10 months ago
@@ninasirsi2340 he said he's a CSE PROFESSOR
@richardnorthiii3374 · 10 months ago
Finally a clear explanation. Thank you.
@ajaypavushetti8787 · 8 months ago
😂
@DanTheTan · 4 months ago
Thanks, helped a lot! I couldn't understand a single symbol before this; now I feel like I'm on the right track!
@MlokothoLanga8 · 3 months ago
Very nice and simple explanation, thanks a lot
@shashwatgandhi4895 · 2 years ago
Correct me if I am wrong. The weight-update algorithm increases the weights if the target value is higher than the actual value, and vice versa. That will make the output in the next iteration closer to the target output, BUT it will not work that way if the inputs (xi) are, say, always negative. In that case the weight update has to be modified: if the input was negative, the sign of the delta weight should be reversed.

Eg.: x1 = -1, w1 = 1 -> x1*w1 = -1 (actual output), sigmoid(-1) -> 0, target output = 1, delta weight = n * 1. Assuming n to be 0.1, delta weight = 0.1. So the new weight becomes w1 = w1 + delta weight = 1.1. But now running it again we see: x1 = -1, w1 = 1.1 -> x1*w1 = -1.1 (actual output), sigmoid(-1.1) -> 0. This makes the algorithm even worse now. So, since the input was negative, rather than adding the delta weight we should have subtracted it: w1 = w1 - delta weight = 1 - 0.1 = 0.9; then x1 = -1, w1 = 0.9 -> x1*w1 = -0.9, sigmoid(-0.9) -> 0.
@ThinkXAcademy · 2 years ago
Yes, for negative inputs we need to handle that case.
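For reference, the standard perceptron learning rule already includes the input term: w_i ← w_i + η·(target − output)·x_i. Because the delta is multiplied by x_i, a negative input flips the sign of the change automatically, which is exactly the fix described in the comment above. A minimal Python sketch (the step activation and single-input setup are illustrative choices, not the video's exact example):

```python
# Perceptron update with the input term included:
#   w_i <- w_i + eta * (target - output) * x_i
# For x1 = -1 the x_i factor flips the sign of the delta,
# so the weight moves in the correct direction.

def step(z):
    return 1 if z >= 0 else 0

eta = 0.1
x1, w1, target = -1.0, 1.0, 1

for _ in range(25):
    output = step(x1 * w1)
    w1 += eta * (target - output) * x1  # note the * x1

print(w1)             # pushed down to (or just below) zero
print(step(x1 * w1))  # now matches the target of 1
```

With the plain `w1 += eta * (target - output)` update from the comment, the same loop would keep increasing w1 and never reach the target.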
@riki2404 · 3 years ago
Simply amazing explanation. Thanks a lot.
@ThinkXAcademy · 3 years ago
Thanks for the support ✔️ These comments make my day 😄
@mauryaashish1865 · 1 year ago
Your way of explanation is so simple and organized that anyone can understand. I enjoyed learning about the perceptron; you are an amazing educator. Thank you for such content. :)
@anvayawalgaonkar4119 · 3 years ago
Explained in a very easy way. Please share the basics of the perceptron in a Jupyter notebook, like real hands-on experience.
@ThinkXAcademy · 3 years ago
Will work on it 😄
@rangeenbilla · 1 year ago
Finally understood after hopping through so many videos. W!
@jeremyyd1258 · 1 year ago
Thank you SO much for such a clear explanation, with the visuals to support it. I really appreciate it!
@bubiubcyufg-zc4ui · 7 months ago
It was super useful for me, thank you my friend!
@bhavikprajapati2614 · 1 year ago
How can a set of data be classified using a simple perceptron? Using a simple perceptron with weights w0, w1, and w2 as −1, 2, and 1 respectively, classify the data points (3, 4), (5, 2), (1, −3), (−8, −3), (−3, 0).
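A worked sketch of the question above, assuming w0 is the bias weight (applied to a constant input of 1) and the usual step activation (class 1 when the weighted sum is ≥ 0):

```python
# Stated weights: w0 = -1 (bias), w1 = 2, w2 = 1.
w0, w1, w2 = -1, 2, 1

def classify(x1, x2):
    z = w0 + w1 * x1 + w2 * x2  # weighted sum plus bias
    return 1 if z >= 0 else 0   # step activation

points = [(3, 4), (5, 2), (1, -3), (-8, -3), (-3, 0)]
for p in points:
    print(p, "->", classify(*p))
# (3, 4) and (5, 2) land in class 1; the other three in class 0.
```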
@harshchindarkar5887 · 3 years ago
Thanks man, now the concept is clear for me...
@ThinkXAcademy · 3 years ago
Great 👍🏻 Please share our videos to help this channel grow 🌟
@srimannarayanaiyengar8914 · 3 years ago
Please post a multilayer perceptron model with an example.
@ThinkXAcademy · 3 years ago
Sure sir
@csadhi · 2 years ago
I had gone through a few videos on the topic and did not get a clear understanding. But your video was very clear and the examples were very simple to understand; great job and keep up the good work. A big thanks for explaining things clearly.
@ThinkXAcademy · 2 years ago
Thanks😀 Do share my videos with other students to help this channel grow🌟
@slainiae · 1 year ago
Excellent explanation👍
@shashwatgandhi4895 · 2 years ago
Wouldn't all the weights be equal after all the iterations, since the delta we are adding to each of the weights is always the same for all of them in any iteration (assuming the weights were the same at the start)?
@dragster100 · 1 year ago
I think the error term (yi − yi bar) takes care of that. As the iterations go on, your error term also becomes smaller and smaller until it eventually converges.
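One caveat on the reply above: in the usual form of the rule the delta is not identical for every weight, because each Δw_i is scaled by its own input x_i. So even weights that start equal drift apart as soon as the inputs differ. A tiny illustrative sketch (the numbers are made up):

```python
eta = 0.1
weights = [0.5, 0.5]  # identical starting weights
x = [1.0, -2.0]       # but different inputs
error = 1             # suppose (target - output) = 1 this step

# Each weight gets its own delta, scaled by its input:
new_weights = [w + eta * error * xi for w, xi in zip(weights, x)]
print(new_weights)    # [0.6, 0.3] -- no longer equal
```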
@vamsipaidupalli7904 · 3 years ago
Nice 👌 keep it up sir
@ThinkXAcademy · 3 years ago
Keep Learning 💯
@veeraprathap5774 · 5 months ago
I have a question: does the perceptron use a sigmoid function? As far as I know, the perceptron uses the step function and logistic regression uses the sigmoid function. If I am wrong, correct me.
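On the question above: the classic perceptron uses a hard step (threshold) activation, while logistic regression uses the smooth sigmoid, which outputs a value in (0, 1) that can be read as a probability. A quick sketch of the difference:

```python
import math

def step(z):                     # perceptron activation: hard 0/1 decision
    return 1 if z >= 0 else 0

def sigmoid(z):                  # logistic regression: smooth output in (0, 1)
    return 1 / (1 + math.exp(-z))

print(step(-1), step(1))                            # 0 1
print(round(sigmoid(-1), 3), round(sigmoid(1), 3))  # 0.269 0.731
```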
@annesarahC137 · 4 months ago
Very clear.
@Babygirl_S · 2 years ago
This was so good! Thank you very much.
@ThinkXAcademy · 2 years ago
Share and Like💯
@sherlockholmes2752 · 2 years ago
Very good explanation!!
@ThinkXAcademy · 2 years ago
Thanks😄 Share our videos to help this channel grow💯
@dashsingh30095 · 3 years ago
Very well explained 😀😀
@ThinkXAcademy · 3 years ago
Thanks..please share my videos to help me grow😄
@amitblizer4567 · 1 year ago
Very clearly explained video, thank you!
@kumarsourabh5862 · 3 years ago
Very nice explanation, thank you.
@ThinkXAcademy · 3 years ago
Like and share our content to support us😄
@basab4797 · 1 year ago
Really awesome
@lowLevelCoder99 · 5 months ago
Sir where is your tutorial on Activation functions?
@ROBERTAGAROFANO-j3b · 1 year ago
Excellent!!
@samarthagarwal6929 · 7 months ago
You forgot to multiply by xi in the formula for calculating the new weight.
@simrakhan6346 · 6 days ago
Yes, I noticed that too.
@fiilixwonder7675 · 2 years ago
Thank you 👍
@ThinkXAcademy · 2 years ago
Share and Subscribe😄
@amnshumansunil3371 · 3 years ago
dude you're amazing!! keep up the good job :)
@ThinkXAcademy · 3 years ago
Thank you😀 Please share to help my channel reach to more students✌🏻
@kabirbaghel8835 · 2 years ago
amazing 10/10
@ThinkXAcademy · 2 years ago
Share and Subscribe 😃
@chamithdilshan3547 · 2 years ago
Thank you!
@ThinkXAcademy · 2 years ago
Welcome 🌟 Share it with other students also👍
@mrkhan3188 · 3 years ago
Thanks dude... I have an exam tomorrow.
@ThinkXAcademy · 3 years ago
Best of luck bro💯
@praveenchristopher7776 · 2 years ago
Thank you for the very clear explanation; it was a pleasure to learn. I have a question about the activation function, x.w+b: since we are using a squashing function, shouldn't it be x.w+b < 0.5 for 0, and x.w+b > 0.5 for it to be classified as 1? Thanks again.
@ThinkXAcademy · 2 years ago
No, I have rechecked the conditions, it is correct in the video.