I'm glad someone like you decided to make a video on this. I have found that many find SVM hard to grasp because they dive directly into the code without understanding the intuition behind it. This goes a long way in helping people out.
@mahikhan5716 3 years ago
The most sensible tutor I have ever seen, one who always has his finger on his students' pulse. There are lots of tutorials about SVM on YouTube, but no one else covers it A to Z like Krish. Appreciate you, Krish.
@victor75570 2 years ago
I cannot begin to thank you enough for breaking down and simplifying the math behind machine learning algorithms. Understanding the math under the hood is pertinent to tuning the hyperparameters. I love your videos, and I'm always recommending that aspiring data scientists check out your channel.
@YouTubelesss 2 years ago
Man, how do you remember all this... I keep forgetting the concepts after a few weeks and have to watch it again to get a grasp on it. A million thanks to you for sharing your precious knowledge with us.
@myeschool2129 4 years ago
My dear teacher, from my heart I salute you, because you work so hard for us, your students, to teach things with so much clarity. Praying for you. ....Noushad Rahim, Kerala
@abhijitbhandari621 3 years ago
You really are so passionate about teaching, sir. Sometimes you even get breathless from the excitement of teaching... I have no words. Hope you will upload videos on topics related to DNNs as well. Proud to be learning from you.
@akshaykhavare5898 2 years ago
The way you simplify things is really commendable. After reading lots of blogs and going through other resources, I finally landed here, and it was worth it. Thank you, Sir.
@datahat642 2 years ago
Very informative video and simple to understand. A slight oversight: since X here is 2-D, [x1, x2], W (without b) must be 2-D as well, [w1, w2]... If we include the bias b, then X = [x1 x2 1] and W = [w1 w2 b]... and in that case we get a plane instead of a line.
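To make the augmentation concrete, here is a minimal NumPy sketch; the values of w, b, and x are made up for illustration, not taken from the video:

    import numpy as np

    w = np.array([-1.0, 0.0])    # 2-D weight vector [w1, w2] (illustrative values)
    b = 3.0                      # bias term (illustrative value)
    x = np.array([4.0, 4.0])     # a 2-D sample point [x1, x2]

    y_standard = w @ x + b       # y = w^T x + b

    w_aug = np.append(w, b)      # W = [w1, w2, b]
    x_aug = np.append(x, 1.0)    # X = [x1, x2, 1]
    y_augmented = w_aug @ x_aug  # same decision value, bias folded in

    assert np.isclose(y_standard, y_augmented)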
@nitinudgirkar 1 year ago
At 4:43, I think W transpose will be a row vector. W itself is a column vector, and you need to take the transpose of W to make it a row vector, that is, [-1 0]. x is fine, as it's a column vector anyway. Your calculation of -4 for y = w(transpose)x is correct for the row vector of W, i.e., the transpose of the column vector W.
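A quick NumPy check of those shapes, assuming w = [-1, 0] stored as a column vector and a point whose first coordinate is 4 (e.g., the (4, 4) discussed in other comments):

    import numpy as np

    w = np.array([[-1.0], [0.0]])  # column vector W, shape (2, 1)
    x = np.array([[4.0], [4.0]])   # column vector x, shape (2, 1)

    y = w.T @ x                    # (1, 2) @ (2, 1) -> (1, 1)
    print(y)                       # [[-4.]], matching the -4 in the video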
@chandrimapramanick1111 1 year ago
Thank you for clearing this up. While doing the math, I was also confused.
@prashantkumarvishwakarma8645 3 years ago
Thanks, Sir, for all of your videos. If it weren't for you, I would never have learned this much; our institute people just gave an overview and left it at that, but the real knowledge came from your videos..... Thank you so much.
@kirushikeshdb1885 3 years ago
The w matrix should be [1 1], because the line equation is x1 + x2 = 0. Also, while computing the value of y, wT should have dimension 1x2 and X should be 2x1, so that you get a single value.
@JackSparrowBoat 2 years ago
Can you please elaborate?
@prateeksingh3808 1 year ago
Yes, I feel the same.
@rvg296 4 years ago
The regularization parameter C is basically how much we want to avoid misclassifying points. If C is very large (towards infinity), we get perfect classification of the training samples but a smaller margin; if C is very small (towards 0), the optimizer finds the maximum-margin classifier even though it misclassifies some points. Hence we have to find a good value of C in between. The gamma parameter defines how much influence a single training example has: if gamma is high, only the points nearest the margin are considered when calculating distances, but if gamma is low, points farther from the margin are also considered.
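A small scikit-learn sketch of those two knobs on toy data; the specific values of C and gamma are only illustrative:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # roughly linearly separable labels

    # Large C punishes misclassification hard (narrow margin, overfit risk);
    # small C tolerates errors in exchange for a wider margin.
    # High gamma makes each point's influence very local; low gamma lets
    # far-away points shape the decision boundary too.
    for C, gamma in [(100.0, 10.0), (0.1, 0.1)]:
        clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
        print(f"C={C}, gamma={gamma}: train acc={clf.score(X, y):.3f}, "
              f"support vectors={len(clf.support_)}")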
@krishnaik06 4 years ago
You are right... thanks, amazing work.
@VishalPatel-cd1hq 4 years ago
Hi Rohit, Ci will actually be the Lagrange multiplier, as our optimization problem here is a constrained one; we generally denote the Lagrange multiplier as lambda(i).
@VishalPatel-cd1hq 4 years ago
In SVM we are trying to solve a max(min) problem, but by satisfying the KKT (Karush-Kuhn-Tucker) conditions we turn it into a min(max) problem and optimize it through SMO (sequential minimal optimization).
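For reference, a sketch of the standard primal problem and the dual that SMO actually optimizes (textbook hard-margin form, not copied from the video):

    % Primal
    \min_{w,b}\ \tfrac{1}{2}\|w\|^2
      \quad\text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 \ \ \forall i

    % Dual (via the Lagrangian and the KKT conditions)
    \max_{\alpha_i \ge 0}\ \sum_i \alpha_i
      - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, x_i^\top x_j
      \quad\text{s.t.}\quad \sum_i \alpha_i y_i = 0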
@VishalPatel-cd1hq 4 years ago
My bad, actually there will be one more term, which gets multiplied with ci(w.T x + b); that term is what we call the Lagrange multiplier.
@coolzkabhijit 4 years ago
Can you please explain the role of gamma? What happens when farther or closer points are considered? How does gamma affect the output?
@SumanBhartismn 4 years ago
Where were you all this time, sir? I was searching for a teacher like you in ML. Finally, mission accomplished. Love from my side.
@usamaahmad7191 9 months ago
Thank you so much, Sir. The way you were teaching, I was getting all of your points; my love for your method and dedication keeps growing. A lot of love, respect, and salutes from Pakistan... knowledge has no boundaries...
@sivareddynagireddy56 2 years ago
I have seen so many articles about SVMs; every one of them jumps straight to the distance formula and says "simply maximize it", but your simplification from scratch is awesome, sir!!!!
@andrewwilliam2209 4 years ago
Hey Krish, I just want to say that your explanations are superb. I am new to machine learning, and I took an online course about it, but it barely gets into the mathematics. I understand that to get good and serious at ML we need a solid mathematical understanding of the various models, so I appreciate these videos that go in depth. To be honest, I watched it the first time and didn't completely get it, but I'm going to watch it again now!
@SAURABHKUMAR-ql8wi 3 years ago
It was a wonderful session. I went through SVM for the first time and was able to relate to the mathematics very well. Thanks a lot for this session.
@nirangannirangan7175 1 year ago
Excellent, sir. Crystal-clear explanation.
@jbhsmeta 4 years ago
Just great!!! Wow!!! It was a great experience. Eagerly waiting for part 3 of SVM, covering the kernel trick.
@lisa-sf7no 1 year ago
Thanks
@vignanvennampally6852 3 years ago
Watching one video of yours = reading 100 blogs. Thank you for saving our time!
@wenqichen4151 3 years ago
I can't wait to express my infinite appreciation for you, sir! This video is so, so intuitive and uses less advanced math!
@DhruvSharma14 4 years ago
Salutes to you, dear teacher! You are one of the best for all concepts related to data science!
@GujratiInGulf 1 year ago
THANK YOU!!!! I cannot even explain how much you helped me. I was about to cry because I was not able to understand the math behind SVM and why we use the Lagrangian function. I have an exam in 10 days, and your videos are really helping me. Thanks once again, and HAPPY GURU PURNIMA from the bottom of my heart!!!
@himtyagi9740 3 years ago
You put a lot of effort into creating this clip... You are an excellent teacher... Best wishes.
@NicJd01 3 years ago
Awesome, brother!!! Brilliantly explained!!👌👌
@chekrasena 4 years ago
Awesome, Krishna Anna 🙏
@rupeshsingh4012 3 years ago
Very, very impressive explanation... Thanks a lot. May God always keep you happy and healthy....
@zinaibrahim 1 year ago
Krish, all I can say is thank you! The best and most comprehensive SVM lecture I've seen (and I've seen many).
@darshitsolanki7352 4 years ago
The sigmoid kernel has an S-shaped graph, and there are linear and polynomial forms too. I'm from a statistics degree; you have great knowledge, dude, keep it up.
@TriedTastedJourney 3 years ago
Best video to understand the math behind SVM. Thanks a lot Sir!
@afsarullashareef3567 2 years ago
Krish, I'm really thankful to you... may God bless you.
@bhabeshmali3640 2 years ago
You are a Gem Krish Naik.
@mayursalunke1654 3 years ago
By watching this video, I actually understood how important math is in real life, and also how logistic regression and support vector machines actually differ. Thank you, Krish sir.
@rajraji417 4 years ago
Thank you so much, bro. I like you so much, and your individuality shows through every day and in every video. I will become a data scientist one day...
@adipurnomo5683 3 years ago
Clearly explained. Highly recommended.
@gianlucalepiscopia3123 1 year ago
soon the best teacher out there
@saisharadhashivakumar1004 2 years ago
Hi sir, without you I would not have understood deep learning this much. Thank you so much.
@deleolukoya4634 3 years ago
Wow! God bless you for all the effort you put in to make this known to us. In fact, you're passionate and affectionate about us, your students. More strength and grace to you. From Nigeria.
@mathrisk 4 years ago
I am a beginner in the subject, and your video gave a pretty good idea of the topic. Thanks.
@hafimaoubarry6967 2 years ago
Very informative video. God bless you.
@tomthomas1431 2 years ago
Very good explanation... easy to understand.
@sciWithSaj 4 years ago
Clearly explained. Nice work, sir.
@brown_bread 3 years ago
At 2:55, b is not equal to c, which is treated here as the 'slope'. b is the bias term, which is not added to the feature vector when dealing with SVM.
@arjundev4908 4 years ago
@Krish Naik.. My respect for you and your work has increased many times over.. You are a godsend.. my saviour :) Thanks for your contribution!! ❤
@ranabhavesh1191 2 years ago
Awesome video, everything became clear.. 🙏🙏
@faizalmakhrus8645 1 year ago
Superb! This kind of explanation is difficult to find on YouTube.
@tukaramugile573 2 years ago
Very good explanation. Thanks
@malathiavinash984 3 years ago
This video is so good. Thank you Krish!!!
@emmanuelibrahim6427 2 years ago
Excellent delivery!
@lokeshrathi5500 4 years ago
The b value that you plot on the graph should be on the y-axis, right? Time: 8:57. Please have a check and let me know. Thanks.
@KrishnaMishra-fl6pu 3 years ago
It's the x-intercept, hence it is correct.
@hessamjamalkhah9781 3 years ago
Thank you, dear Krish. You nailed SVM for me; I totally understood the concept behind it, and I really appreciate that. Wish you all the best.
@geetanshkalra8980 3 years ago
Thank you, sir! Your video helped me get an internship at a very good company! ❤️🔥💥 Please continue the same work! It really helps us! Thank you!🔥🔥🔥
@cryptogaming8026 3 years ago
Please check whether w1 is the slope of the hyperplane, because if we consider the equation w1x1 + w2x2 + b = 0, the slope comes out to be -w1/w2, so technically, in the example you explained, you took the y-axis to be your hyperplane.
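Solving the hyperplane equation for x2 shows where the slope comes from:

    w_1 x_1 + w_2 x_2 + b = 0
    \;\Rightarrow\; x_2 = -\frac{w_1}{w_2}\,x_1 - \frac{b}{w_2}
    \;\Rightarrow\; \text{slope} = -\frac{w_1}{w_2}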
@jevoncharles8680 2 years ago
You are a GENIUS!
@waqarsarwar7012 4 years ago
I appreciate your effort and your way of teaching.
@tramytran1992 1 year ago
Thank you so much for your teaching. Wishing the best things for you.
@dipanwitamanna9540 4 years ago
Beautifully and easily explained. Helpful
@vijethrai2747 4 years ago
18:49 The C value is not how many errors to tolerate; it is quite the opposite. If the C value increases, the tendency to make mistakes decreases, because the loss grows as errors increase, and vice versa. A greater C will tend to overfit and a lesser C will tend to underfit.
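The standard soft-margin objective makes this explicit (the xi_i are slack variables measuring margin violations):

    \min_{w,b,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_i \xi_i
      \quad\text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0

A larger C makes each unit of slack more expensive, so the optimizer tolerates fewer violations (tighter fit); a smaller C buys a wider margin at the cost of some misclassified points.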
@debahutimishra3348 4 years ago
This is an awesome video on the mathematics behind linear SVM... Too good... Keep it up.
@rvg296 4 years ago
Krish, I guess you missed the squaring of ||W||. Basically, maximizing (2/||W||) or (1/||W||) is essentially the same, which means we have to minimize ||W||. Just for mathematical convenience, we write it as (1/2)(||W||)^2, because differentiating this w.r.t. W gives simply W, which is easy to work with.
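Spelled out: since ||w|| is non-negative and t -> t^2/2 is increasing on [0, inf), the three problems below have the same minimizer, and the squared form has the cleanest gradient:

    \max_{w}\ \frac{2}{\|w\|}
    \;\Longleftrightarrow\; \min_{w}\ \|w\|
    \;\Longleftrightarrow\; \min_{w}\ \tfrac{1}{2}\|w\|^2,
    \qquad \nabla_w\, \tfrac{1}{2}\|w\|^2 = w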
@muzamilhussain9908 4 years ago
And the distance x2 - x1 was not the indicated distance... that is just the horizontal distance; for that distance we have to use Pythagoras.
@AmitSharma-rj2rp 4 years ago
@@muzamilhussain9908 Yes, I was wondering the same thing: how do you get the horizontal distance from subtraction?
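For what it's worth, the usual derivation needs neither the raw subtraction nor Pythagoras; it projects x2 - x1 onto the unit normal w/||w||. If x1 and x2 lie on the two margin planes:

    w^\top x_1 + b = -1, \qquad w^\top x_2 + b = +1
    \;\Rightarrow\; w^\top (x_2 - x_1) = 2
    \;\Rightarrow\; \frac{w^\top}{\|w\|}\,(x_2 - x_1) = \frac{2}{\|w\|}

So the margin width is the component of x2 - x1 along the normal direction, not the coordinate difference itself.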
@expertreviews1112 2 years ago
Very nice, and a lot of effort put in to explain... a complex topic, but really nicely explained...
@vishaljhaveri7565 3 years ago
Thank you, Krish Sir.
@devasheeshvaid9057 4 years ago
At 3:00, how is 'WT' = [-1, 0]? W extends from the origin and is perpendicular to 'X' (i.e., the hyperplane). So shouldn't 'WT' be [-1, -1] (or any other vector perpendicular to 'X', i.e., the hyperplane)?
@ChiTamNguyen-d9d 1 year ago
I agree; this should be [-1, -1].
@hemangdhanani9434 3 years ago
Understood completely... thanks for simplifying.
@ProgrammingCradle 4 years ago
Beautifully explained... Thank you Krish :)
@sandipansarkar9211 4 years ago
Great explanation, Krish. Thanks.
@mangaenfrancais934 4 years ago
You are the best.
@optimusVideo 1 year ago
You taught it well; I understood quickly.
@ManishKumar-qs1fm 4 years ago
In one word: awesome.
@sridharmakkapati6586 4 years ago
Thanks for sharing your knowledge.
@vigneshvicky6720 3 years ago
Nice nice very nice❤
@vijayalakshmi3968 3 years ago
Thank you so much, sir... your videos are very helpful for my course... well explained!
@rengarajanraman8608 3 years ago
Thanks for putting a lot of effort into explaining such complex concepts.
@davidlee4293 4 years ago
Super good... you explain it very well. Thank you.
@datakube3053 4 years ago
Respect for your work and efforts.
@carearayam 4 years ago
Excellent explanation, my friend
@eeshdeepsingh7030 4 years ago
Thanks a lot for this video, sir ❤️ Waiting for SVM part 3, sir!!!
@SAN-te3rp 4 years ago
I've been searching for someone who teaches math like this; finally found him 🙏🙏
@codeforcoders69 4 years ago
Me too
@LanteLuthuli 2 years ago
Legendary ... New subscriber!
@venirajan2772 3 years ago
Thank you so much for the awesome explanation. Keep it up, sir.
@williamblanzeisky2524 4 years ago
Dude, you are so fun to watch, and you make SVM so much easier, lol.
@RO_BOMan 1 year ago
No words for this great work. Thank you very much for making these concepts very, very easy to understand. I would suggest you arrange all of them in a particular order or give these videos sequential numbers, so that it is easy to go through all the topics without any deviation. Thanks again; keep uploading more videos on different topics.
@mohamedgaal5340 1 year ago
Thanks bro. You really did your best to simplify things. I truly appreciate it.
@akshaybagal2208 3 years ago
Great stuff and a nice explanation!!!
@shivadumnawar7741 4 years ago
Great tutorial. Thank you so much sir.
@ratulghosh3849 4 years ago
Thanks for the explanation of the intuition behind SVM.
@midhunskani 4 years ago
I was waiting for this video. Thanks again. Liked and subscribed.
@fancy4926 4 years ago
At around 5:30, take point (4,4) for example: if b = 1, the product [-1, 1]T [4, 4] would be a 2x2 matrix, and then we can't say whether it is a positive or negative value. Is there an error in how the equation is written?
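The 2x2 result only appears when a column vector multiplies a row vector (an outer product); the decision value is the row-times-column case. A NumPy check, using the comment's w = [-1, 1], x = (4, 4), and b = 1:

    import numpy as np

    w = np.array([[-1.0], [1.0]])  # column vector, shape (2, 1)
    x = np.array([[4.0], [4.0]])   # column vector, shape (2, 1)
    b = 1.0

    outer = w @ x.T                # (2,1) @ (1,2) -> a (2, 2) matrix
    inner = w.T @ x                # (1,2) @ (2,1) -> a (1, 1) scalar: [[0.]]
    print(inner + b)               # w^T x + b = [[1.]], a definite positive sign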
@SaulWilliamss 4 years ago
These videos are incredibly helpful; thank you very much for sharing your knowledge with us!
@hemantdas9546 4 years ago
Sir, we need SVM regression.
@AbhishekKumar-gu3ny 3 years ago
I have a doubt: at 10:43, isn't the maximum distance between the margins X1 + X2? Thank you for this simplified lecture on SVM.
@K-mk6pc 2 years ago
Sir, can you share how you learned these concepts and what reference materials you usually use, so that we can start learning the way you do?
@mwaurades 2 years ago
I really appreciate the tutoring, Sir. Keep up the good work!!!
@Neuraldata 4 years ago
You are really an inspiration to many 👌
@JaydeepSinghTindori 4 years ago
Thanks for your good explanation of SVM. But I have a confusion: when you changed the representation at 20:29 from max(2/||w||) in algebraic form to min(||w||/2) in calculus form, I think it should be min((||w||^2)/2), as we want to maximize (1/||w||), i.e., minimize ||w||, and the squared form differentiates cleanly.
@krishnaik06 4 years ago
Yes, you are right... missed that one.
@geogeo14000 3 years ago
@@krishnaik06 Hello, and thank you. How do we go from ||w|| to ||w||^2?
@tabindabhat9078 4 years ago
Wow. Great video. Thanks.
@turkeshpote3239 4 years ago
Fantastic explanation.
@sarabjeetsingh5033 3 years ago
Hi Krish, at 3:32 the matrix multiplication is incorrect. Consider a matrix A of order m x n, i.e., it has m rows and n columns, and a matrix B of order n x p, i.e., n rows and p columns. When you multiply A and B, the resultant matrix should be of order m x p. In your case you multiplied a matrix of order 2x1 with one of order 1x2, so the resultant matrix should be of order 2x2, but you have a 1x1. Also, what you have written looks more like the determinant of a matrix (a scalar value) than a matrix. Could you check this?
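A last NumPy sketch of that m x n times n x p rule, with illustrative vectors (not the video's exact numbers):

    import numpy as np

    a = np.array([[-1.0], [0.0]])  # shape (2, 1)
    c = np.array([[4.0, 4.0]])     # shape (1, 2)

    print((a @ c).shape)           # (2, 2): a (2x1) times a (1x2)
    print((c @ a).shape)           # (1, 1): a (1x2) times a (2x1), the scalar case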