If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.
@arpit743 · 3 years ago
Excellent video! Bro, why do we have multiple neurons in every hidden layer? Is it from the point of view of introducing non-linearity?
@MachineLearningWithJay · 3 years ago
@@arpit743 Yes, but not entirely. Multiple neurons allow us to capture complicated patterns. A single neuron won’t be able to capture complicated patterns from the dataset.
@arpit743 · 3 years ago
@@MachineLearningWithJay Thanks a lot! But why is it that this allows for complicated boundaries?
@Sigma_Hub_01 · 2 years ago
@@arpit743 More refined outputs let you see the limitations of your network's boundaries, so you can pinpoint the exact location and correct it as per your needs. It doesn't allow for complicated boundaries; rather, you are ALLOWED to see your complicated boundaries, and hence work through them.
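A concrete way to see the point in this thread (a minimal sketch; the weights below are hand-picked for illustration, not from the video): a single neuron can only draw one linear boundary, but two hidden neurons combined through a non-linear activation can reproduce XOR, which no single neuron can fit.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs (4 samples, 2 features) and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Hand-picked weights: one hidden neuron acts like OR, the other like NAND;
# the output neuron ANDs them together, yielding XOR.
W1 = np.array([[20.0, 20.0],     # OR-like neuron
               [-20.0, -20.0]])  # NAND-like neuron
b1 = np.array([-10.0, 30.0])
W2 = np.array([20.0, 20.0])      # output neuron: AND of the two hidden units
b2 = -30.0

hidden = sigmoid(X @ W1.T + b1)    # shape (4, 2)
output = sigmoid(hidden @ W2 + b2) # shape (4,)
print((output > 0.5).astype(int))  # [0 1 1 0], matching XOR
```

With only one neuron (one linear boundary plus a sigmoid), no choice of weights separates these four points.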
@debanjannanda2081 · 1 month ago
Sir, there is a mistake at timestamp 0:41: a2[1] is wrong, and a2[2] is the activation you should have written, because the values of a that you multiply with the weights in the weighted sum belong to the first hidden layer, and they are used to find the second hidden layer's value a2[2], not a2[1]. But you really teach great. Thank you...
@coverquick490 · 2 years ago
I've always felt as if I was on the cusp of understanding neural nets but this video brought me past the hump and explained it perfectly! Thank you so much!
@MachineLearningWithJay · 2 years ago
I am really elated hearing this. Glad it helped you out. Thank you so much for your appreciation. 🙂
@kheireddine7889 · 2 years ago
This video should be titled "Explain Forward and Backward Propagation to Me Like I'm Five". Thanks man, you saved me a lot of time.
@MachineLearningWithJay · 2 years ago
One of the Best Comments I have seen. Thank you so much! And thanks for the title idea 😂😄
@PrithaMajumder · 3 months ago
Thanks a lot for this amazing introductory lecture 😊 Lecture 2 completed from this Neural Network playlist!
@saumyaagrawal7781 · 2 months ago
This was more helpful than my lectures!
@MachineLearningWithJay · 1 month ago
Glad to help!!
@Amyx11 · 1 year ago
Literally best. Crisp and clear!! Thank you
@farabiislam2418 · 1 year ago
You explain better than popular course instructor on deep learning
@MachineLearningWithJay · 1 year ago
Thanks for the compliment 😇
@sajan2980 · 1 year ago
I am sure he is talking about Andrew Ng, lol. His explanation in that video is too detailed and the notations are too confusing. But the same explanation in his Machine Learning Specialization course is much better.
@whoooare20 · 3 years ago
You explained it in a very clear and easy way. Thank you, this is so helpful!
@MachineLearningWithJay · 3 years ago
You're welcome!
@kunalbahirat7795 · 2 years ago
Best video on YouTube for this topic!
@MachineLearningWithJay · 2 years ago
Thank you so much. I really appreciate your comment! 🙂
@social.2184 · 7 months ago
Very informative video. Explained all the terms in a simple manner. Thanks a lot!
@sushantregmi2126 · 2 years ago
so glad I found this channel!!
@MachineLearningWithJay · 2 years ago
Thank you! I appreciate your support 😇
@nooreldali7432 · 1 year ago
Best explanation I've seen so far
@venompubgmobile7218 · 3 years ago
I'm a bit confused by the exponent notation, since some of it doesn't correspond to the rest.
@kenjopac4247 · 2 years ago
This was actually pretty straightforward.
@MachineLearningWithJay · 2 years ago
Glad if it helped you!
@nishigandhasatav3559 · 1 year ago
Absolutely loved the way you explain. So easy to understand. Thank you
@petchiammala1430 · 2 years ago
Super, sir. I have learned a lot from this, including the calculation method. It's very useful for our studies. Thank you, sir!
@MachineLearningWithJay · 2 years ago
Happy to help!
@harshwardhankurale310 · 3 months ago
Top Class Explanation!
@MachineLearningWithJay · 3 months ago
Glad it was helpful!
@ahmeterdonmez9195 · 2 months ago
At 0:58, in a1[1] = activation(...), the last term of the sum should be W13[1]*a3[0], not W13[1]*a3[1].
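The indexing rule behind this correction: the weighted sum for layer l always consumes the activations of layer l-1, i.e. a[l] = f(W[l]·a[l-1] + b[l]). A minimal sketch (layer sizes are arbitrary, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 2]          # sizes of a[0] (input), a[1], a[2]

# W[l] has shape (n_l, n_{l-1}); b[l] has shape (n_l,)
W = {l: rng.standard_normal((layer_sizes[l], layer_sizes[l - 1]))
     for l in (1, 2)}
b = {l: np.zeros(layer_sizes[l]) for l in (1, 2)}

a = {0: rng.standard_normal(layer_sizes[0])}  # input vector a[0]
for l in (1, 2):
    z = W[l] @ a[l - 1] + b[l]   # the weighted sum uses a[l-1], never a[l]
    a[l] = np.tanh(z)            # tanh stands in for the video's activation

print(a[2].shape)  # (2,)
```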
@AryanSingh-eq2jv · 1 year ago
Best explanation, best playlists. I don't usually interact with the algorithm by giving likes or dropping comments, but you beat me into submission with this. Hopefully I understand the rest of it too, lol.
@VC-dm7jp · 2 years ago
Such a simple and neat explanation.
@MachineLearningWithJay · 2 years ago
Thank you!
@rawanmohammed5552 · 3 years ago
You are great. It would be very good if you continue.
@MachineLearningWithJay · 3 years ago
Thank you for your support! I will surely continue making more videos.
@bincybincy1 · 5 months ago
This is so well explained. Thank you!
@johnalvinm · 1 year ago
Very helpful and to the point and correct!
@DAYYAN294 · 2 years ago
Excellent explanation, jazakallah bro.
@maximillian7310 · 2 years ago
Thanks man. The slides were amazingly put together.
@MachineLearningWithJay · 2 years ago
Thank you so much!
@michaelzheng951 · 1 year ago
Fantastic explanation. Thank you
@alpstech · 4 months ago
You dropped something ... 👑
@MachineLearningWithJay · 3 months ago
haha.. what is it? Thanks btw
@Swarnajit_Saha · 1 year ago
Your videos are very helpful. It would be great if you sorted the videos in order. Thank you 😇😇😇
@mdtufajjalhossain1246 · 3 years ago
You are really awesome. I love your teaching ability.
@MachineLearningWithJay · 3 years ago
Thank you so much !
@mdtufajjalhossain1246 · 3 years ago
@@MachineLearningWithJay You are most welcome, bro. Please make a video implementing multiclass logistic regression using the one-vs-all/one-vs-one method.
@MachineLearningWithJay · 3 years ago
@@mdtufajjalhossain1246 Okay! Thanks for suggesting!
@chandanpramanik4399 · 1 year ago
Nicely explained. Keep up the good job!
@ishayatfardin7 · 1 year ago
Brother, your explanation was great, but there are some mistakes I have pointed out.
@muhammadrabbanizainalabidi2409 · 2 years ago
Good Explanation !!
@MachineLearningWithJay · 2 years ago
Thank you!
@omarsheetan4417 · 3 years ago
Great video and great explanation, thanks dude!
@MachineLearningWithJay · 3 years ago
You're welcome!
@waleedrafi1509 · 3 years ago
Great video! Please also make a video on SVM as soon as possible.
@MachineLearningWithJay · 3 years ago
Okay, sure! Thank you so much for your suggestion. I have been asked a lot to make a video on SVM, so I will try to make it just after finishing this Neural Network playlist.
@marcoss2ful · 1 year ago
Where does the algorithm that calculates the next W at 5:30 come from? I know it is intuitive, but does it have something to do with Euler's method? Or another one? Thank you so much for these incredible videos.
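For context on this question: the update at 5:30 is the standard gradient descent rule, W := W − α·dJ/dW. It can indeed be read as an Euler discretization of continuous gradient flow, but it is usually derived directly as "step opposite the gradient". A minimal sketch on a one-variable cost (my own toy example, not the video's):

```python
# Gradient descent on J(w) = (w - 3)^2, whose gradient is dJ/dw = 2*(w - 3).
# The update w := w - alpha * dJ/dw drives w toward the minimum at w = 3.
w = 0.0
alpha = 0.1
for _ in range(200):
    grad = 2.0 * (w - 3.0)
    w = w - alpha * grad
print(round(w, 4))  # 3.0
```

In a network, the same rule is applied to every weight matrix, with dJ/dW computed by backpropagation.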
@sabeehamehtab6954 · 3 years ago
Awesome, really helpful! Thank you
@MachineLearningWithJay · 3 years ago
You're welcome!
@PrinceKumar-el7ob · 3 years ago
Thank you, sir, it was really helpful.
@MachineLearningWithJay · 3 years ago
You're welcome!
@agrimgupta3221 · 3 years ago
Your videos on neural networks are really good. Can you please also upload videos on generalized neural networks? That would really be helpful. P.S. Keep up the good work!!!
@MachineLearningWithJay · 3 years ago
Thank you so much for your feedback. I will surely consider making videos on generalized neural networks.
@testyourluck3914 · 1 year ago
Are B1 and B2 initialized randomly too?
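This question goes unanswered in the thread. As a general note (standard practice, not something confirmed by the video): weights are initialized randomly to break the symmetry between neurons, while biases can simply start at zero. A sketch with assumed layer sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
n_x, n_h, n_y = 3, 4, 1   # assumed input, hidden, and output sizes

# Weights: small random values so the hidden neurons start out different
# from each other (symmetry breaking).
W1 = rng.standard_normal((n_h, n_x)) * 0.01
W2 = rng.standard_normal((n_y, n_h)) * 0.01

# Biases: zeros are fine, because the random weights already ensure each
# neuron receives a different gradient.
B1 = np.zeros((n_h, 1))
B2 = np.zeros((n_y, 1))
```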
@babaabba9348 · 3 years ago
great video as always
@MachineLearningWithJay · 3 years ago
Thank You soo much !!!
@taranerafati9730 · 1 year ago
great video
@kewtomrao · 3 years ago
Isn't the equation Z = W.X + B supposed to be Z = transpose(W).X + B? Hence the weight matrix you have given is wrong, right?
@MachineLearningWithJay · 3 years ago
Hi... I have taken the shape of W as (n_h, n_x). Thus equation will be Z = W.X + B. But if you take W as (n_x, n_h), then equation of Z = transpose(W).X + B. Both represent the same thing. Hope it helps you.
@kewtomrao · 3 years ago
@@MachineLearningWithJay Thanks for the quick clarification. Makes sense now. Keep up the great work!!
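A quick numeric check of the equivalence described in this thread (shapes and values are arbitrary illustrations): storing W as (n_h, n_x) with Z = W.X + B gives the same result as storing the transposed matrix and writing Z = transpose(W).X + B.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_h, m = 3, 4, 5              # input size, hidden size, batch size
X = rng.standard_normal((n_x, m))
B = rng.standard_normal((n_h, 1))  # broadcasts across the m columns

# Convention 1: W has shape (n_h, n_x), so Z = W @ X + B
W = rng.standard_normal((n_h, n_x))
Z1 = W @ X + B

# Convention 2: the same weights stored as shape (n_x, n_h),
# so Z = W_alt.T @ X + B
W_alt = W.T
Z2 = W_alt.T @ X + B

print(np.allclose(Z1, Z2))  # True
```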
@ibrahimahmethan586 · 2 years ago
Good job. But in gradient descent, W2 and W1 must be updated simultaneously.
@MachineLearningWithJay · 2 years ago
Thank you! Yes they should be updated simultaneously.
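A sketch of what "simultaneously" means in practice (my own minimal example on a tiny linear network, not the video's implementation): all gradients are computed from the current weights first, and only then are both matrices updated, so the W1 update never sees an already-updated W2.

```python
import numpy as np

def step(W1, W2, X, y, lr=0.05):
    """One gradient-descent step on a tiny two-layer linear network
    (mean squared error loss), updating W1 and W2 simultaneously."""
    h = W1 @ X                # hidden layer (no activation, for brevity)
    y_hat = W2 @ h
    err = y_hat - y           # shape (1, m)
    m = X.shape[1]
    # Both gradients use the CURRENT W1 and W2 ...
    dW2 = (err @ h.T) / m
    dW1 = (W2.T @ err @ X.T) / m
    # ... and only then are both weight matrices updated.
    return W1 - lr * dW1, W2 - lr * dW2

# quick demo: fit y = [1, -2] @ x on random data
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 20))
y = np.array([[1.0, -2.0]]) @ X
W1 = 0.5 * rng.standard_normal((3, 2))
W2 = 0.5 * rng.standard_normal((1, 3))
loss_before = float(np.mean((W2 @ W1 @ X - y) ** 2))
for _ in range(300):
    W1, W2 = step(W1, W2, X, y)
loss_after = float(np.mean((W2 @ W1 @ X - y) ** 2))
```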
@iZeyad95 · 2 years ago
Amazing work, keep it going :)
@MachineLearningWithJay · 2 years ago
Thank You!
@premkumarsr4021 · 7 months ago
Super, bro ❤❤❤❤
@vipingautam9501 · 2 years ago
Small doubt: what is f(z1)? I am assuming these are just different types of activation functions, where the input is the weights of the current layer times the inputs from the previous layer... is that correct?
@MachineLearningWithJay · 2 years ago
Yes, correct... but do check the equations properly; they include a bias term as well.
@vipingautam9501 · 2 years ago
@@MachineLearningWithJay Thanks for your prompt response.
@gautamthulasiraman18 · 2 years ago
Sir, it's W¹¹[¹] * a⁰[1], right? You've done it as W¹¹[¹] * a¹[1] in the matrix multiplication. Can you just verify whether I'm wrong?
@MachineLearningWithJay · 2 years ago
Yes… there is a typo error
@faisaljan3884 · 2 years ago
What is this B1?
@nothing5987 · 3 years ago
Hi, can you add the caption option?
@MachineLearningWithJay · 3 years ago
Hi... somehow captions were not generated for this video. All my other videos do have captions. I will change the settings to bring captions to this video as well. Thanks for bringing this to my attention.
@xinli3642 · 2 years ago
Can A* actually be Z*, e.g. A1 = Z1?
@MachineLearningWithJay · 2 years ago
No, we need to apply a non-linear activation function. So A1 must be = some_non_linear_function(Z1)
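A quick check of why the identity won't do (my own sketch, not from the video): with a linear "activation", two stacked layers collapse into a single linear layer, so the extra depth buys nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal((4, 1))
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 1))
X = rng.standard_normal((3, 5))

# Two layers with identity "activation" (A = Z) ...
out = W2 @ (W1 @ X + b1) + b2

# ... equal one single linear layer with merged weights:
W_merged = W2 @ W1
b_merged = W2 @ b1 + b2
print(np.allclose(out, W_merged @ X + b_merged))  # True
```

A non-linear f between the layers breaks this collapse, which is what lets the network represent non-linear functions.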
@fadhliana · 3 years ago
Hi, how do you calculate the cost?
@MachineLearningWithJay · 3 years ago
You will get all the information in the upcoming videos that I have already uploaded in this series. If you still have questions, you can email me at: codeboosterjp@gmail.com
@gitasaheru2386 · 2 years ago
Please share the code for the backpropagation algorithm.
@abdallahlakkis449 · 1 year ago
Why no subtitles?
@priyanshshankhdhar1910 · 1 year ago
Wait, you haven't explained backpropagation at all.
@beypazariofficial · 1 year ago
let bro cook
@UmerMehmood-n3f · 2 months ago
Extremely confusing tutorial, and there's a mistake: this should be A[3]⁰, not A[3]¹.