Maths Intuition Behind Support Vector Machine Part 2 | Machine Learning Data Science

224,386 views

Krish Naik

day(s) ago

Comments: 330
@572varuag · 4 years ago
I'm glad someone like you decided to make a video on this. I have found that many find SVM hard to grasp because they dive directly into the code without understanding the intuition behind it. This goes a long way in helping people out.
@mahikhan5716 · 3 years ago
The most sensible tutor I have seen in my life, who always keeps a finger on his students' pulse. There are lots of tutorials about SVM on YouTube, but no one else covered it A to Z the way Krish does. Appreciate you, Krish.
@victor75570 · 2 years ago
I cannot begin to thank you enough for breaking down and simplifying the math behind the machine learning algorithms. Understanding the math under the hood is pertinent to tuning the hyperparameters. I love your videos and I'm always recommending aspiring data scientists to check out your channel.
@YouTubelesss · 2 years ago
Man, how do you remember all this... I keep forgetting the concepts after a few weeks and have to watch it again to get a grasp on it. A million thanks to you for sharing your precious knowledge with us.
@myeschool2129 · 4 years ago
My dear teacher, from my heart I salute you, because you work so hard for us, your students, teaching things with so much clarity. Praying for you. ....Noushad Rahim, Kerala
@abhijitbhandari621 · 3 years ago
You really are passionate about teaching, sir. Sometimes you even get breathless from that excitement of teaching... I've got no words. Hope you will upload videos on topics related to DNNs as well. Proud to be learning from you.
@akshaykhavare5898 · 2 years ago
The way you simplify things is really commendable. After reading a lot of blogs and going through other resources, I finally landed here, and it was worth it. Thank you, sir.
@Reem.alhamimi · 1 month ago
At 4:43, W transpose will be a row vector. W itself is a column vector, and you need to take the transpose of W to make it a row vector - that is, [-1 0]. x is fine, as it's a column vector anyway. Your calculation of -4 for y = w(transpose)x is correct for the row vector of W - that is, the transpose of the column vector W.
@datahat642 · 2 years ago
Very informative video and simple to understand. A slight oversight: as X here is 2-D, [x1, x2], W (without b) must be 2-D as well, [w1, w2]... If we include the bias b, then X = [x1 x2 1] and W = [w1 w2 b]... and in that case we have a plane instead of a line.
@nitinudgirkar · 1 year ago
At 4:43, I think the W transpose will be a row vector. W itself is a column vector, and you need to take the transpose of W to make it a row vector - that is, [-1 0]. x is fine, as it's a column vector anyway. Your calculation of -4 for y = w(transpose)x is correct for the row vector of W - that is, the transpose of the column vector W.
@chandrimapramanick1111 · 1 year ago
Thank you for clearing this up. While doing the math I was also confused.
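The row-vs-column point in the thread above can be checked in a few lines of NumPy. This is only a sketch of the video's example: w = [-1, 0] as a column vector is from the discussion, while the second coordinate of the sample point x (here 2) is an assumed value, since only the result -4 is quoted.

```python
import numpy as np

# w as a column vector, per the thread above: w = [-1, 0]^T
w = np.array([[-1], [0]])        # shape (2, 1)
# a sample point as a column vector; x2 = 2 is an assumed value
x = np.array([[4], [2]])         # shape (2, 1)

# taking the transpose makes w a row vector, so the product is a scalar
y = w.T @ x                      # (1, 2) @ (2, 1) -> (1, 1)
print(y[0, 0])                   # -1*4 + 0*2 = -4
```

The key shape rule: only the (1, 2) @ (2, 1) pairing collapses to a single number, which is why the transpose matters.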
@prashantkumarvishwakarma8645 · 3 years ago
Thanks, sir, for all of your videos. If it weren't for you, we would never have learned this much; our institute people just give an overview and leave it there, but the real knowledge came from your videos... Thank you so much.
@kirushikeshdb1885 · 3 years ago
The w matrix should be [1 1], because the line equation is x1 + x2 = 0. Also, while computing the value of y, wT should have dimension 1x2 and X should be 2x1, so that you get a single value.
@JackSparrowBoat · 2 years ago
Can you pls elaborate?
@prateeksingh3808 · 1 year ago
Yes I feel the same
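To elaborate on the comment above: with w = [1, 1] for the line x1 + x2 = 0, the sign of wᵀx tells you which side of the line a point falls on. The two sample points below are hypothetical, chosen just to land on opposite sides.

```python
import numpy as np

w = np.array([1, 1])                     # normal vector for the line x1 + x2 = 0

points = np.array([[2.0, 3.0],           # x1 + x2 > 0: above the line
                   [-1.0, -4.0]])        # x1 + x2 < 0: below the line

scores = points @ w                      # w^T x for each point: [5., -5.]
labels = np.sign(scores)                 # predicted side: [ 1., -1.]
```

Each row (1x2 against the 2-vector w) collapses to one scalar per point, which is exactly the "single value" the comment describes.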
@rvg296 · 4 years ago
The regularization parameter C is basically how much we want to avoid misclassification of points. If C is very large (towards infinity), we get perfect classification of the training samples but a smaller margin; if C is very small (towards 0), the optimizer finds the maximum-margin classifier even though it misclassifies some points. Hence we have to find a good value of C in between. The gamma parameter defines how much influence a training example has: if gamma is high, only the points nearest the margin are considered when calculating distances, but if gamma is low, even points farther from the margin are also considered.
@krishnaik06 · 4 years ago
You are right... thanks, amazing work.
@VishalPatel-cd1hq · 4 years ago
Hi Rohit, Ci will actually be a Lagrange multiplier, since our optimization problem here is a constrained one; we generally denote the Lagrange multiplier as lambda(i).
@VishalPatel-cd1hq · 4 years ago
In SVM we are trying to solve a max(min) problem, but by satisfying the KKT (Karush-Kuhn-Tucker) conditions we turn it into a min(max) problem and optimize it through SMO (sequential minimal optimization).
@VishalPatel-cd1hq · 4 years ago
My bad - actually there will be one more term multiplied with ci(w.T x + b), and that is what we call the Lagrange multiplier.
@coolzkabhijit · 4 years ago
Can you please explain what is the role of gamma? What happens if farther or closer points are considered? How gamma affects the output?
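The question about gamma's role can be illustrated directly from the RBF kernel, exp(-gamma * ||x - z||^2), which measures how much influence a training point z has on a point x. This is a hedged sketch with hypothetical points and gamma values, not the video's own example.

```python
import numpy as np

def rbf(x, z, gamma):
    # RBF kernel value: the influence of training point z at location x
    return np.exp(-gamma * np.sum((x - z) ** 2))

z = np.array([0.0, 0.0])                       # a training point
near = np.array([0.5, 0.0])                    # a nearby query point
far = np.array([3.0, 0.0])                     # a faraway query point

# high gamma: influence dies off fast, only nearby points matter
hi_near, hi_far = rbf(near, z, gamma=10), rbf(far, z, gamma=10)

# low gamma: influence decays slowly, even far points still matter
lo_near, lo_far = rbf(near, z, gamma=0.01), rbf(far, z, gamma=0.01)
```

With gamma = 10 the far point's influence is essentially zero, while with gamma = 0.01 it stays above 0.9 - which is the "only nearest points are considered" vs. "even farther points are considered" behavior described above.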
@SumanBhartismn · 4 years ago
Where were you all this time, sir? I was searching for a teacher like you in ML. Finally, mission accomplished. Love from my side.
@usamaahmad7191 · 9 months ago
Thank you so much, sir. The way you were teaching, I was getting all of your points; my love for your method and dedication keeps growing. A lot of love, respect, and a salute from Pakistan... knowledge has no boundaries.
@sivareddynagireddy56 · 2 years ago
I saw so many articles about SVMs; everyone jumps straight to the distance formula and says "simply maximize it", but your derivation from scratch is awesome, sir!!
@andrewwilliam2209 · 4 years ago
Hey Krish, I just want to say that your explanations are superb. I am new to machine learning and took an online course on it, but it barely gets into the mathematics. I understand that to get good and serious at ML we need a solid mathematical understanding of the various models, so I appreciate these videos that go in depth. To be honest, I watched it the first time and didn't completely get it, but I'm going to watch it again now!
@SAURABHKUMAR-ql8wi · 3 years ago
It was a wonderful session. This was my first time going through SVM, and I was able to relate to the mathematics very well. Thanks a lot for this session.
@nirangannirangan7175 · 1 year ago
Excellent, sir. Crystal-clear explanation.
@jbhsmeta · 4 years ago
Just Great !!! Wow!!! - It was a great experience. Eagerly waiting for the part 3 of SVM covering the kernel trick.
@lisa-sf7no · 1 year ago
Thanks
@vignanvennampally6852 · 3 years ago
Watching One video of yours = reading 100 blogs. Thank you for saving our time!
@wenqichen4151 · 3 years ago
I can't wait to express my infinite appreciation for you, sir! This video is so so so intuitive and uses less advanced math!
@DhruvSharma14 · 4 years ago
Salute to you, dear teacher! You are one of the best for all concepts related to data science!
@GujratiInGulf · 1 year ago
THANK YOU !!!! I cannot even explain how much you helped me. I was about to cry, as I was not able to understand the math behind SVM and why we use the Lagrangian function. I have an exam in 10 days, and your videos are really helping me at this time. Thanks once again, and HAPPY GURU PURNIMA from the bottom of my heart !!!
@himtyagi9740 · 3 years ago
You have put a lot of effort into creating this clip... You are an excellent teacher... Best wishes.
@NicJd01 · 3 years ago
Awesome, brother!!! Brilliantly explained!!👌👌
@chekrasena · 4 years ago
Krishna Anna, awesome 🙏
@rupeshsingh4012 · 3 years ago
Very, very impressive explanation... Thanks a lot. May God always keep you happy and healthy...
@zinaibrahim · 1 year ago
Krish, all I can say is thank you! the best and most comprehensive SVM lecture I've seen (and I've seen many).
@darshitsolanki7352 · 4 years ago
Kernels have a sigmoid S-shaped graph and linear and polynomial forms. I'm from a statistics degree; you have great knowledge, dude, keep it up.
@TriedTastedJourney · 3 years ago
Best video to understand the math behind SVM. Thanks a lot Sir!
@afsarullashareef3567 · 2 years ago
Krish, I'm really thankful to you... may God bless you.
@bhabeshmali3640 · 2 years ago
You are a Gem Krish Naik.
@mayursalunke1654 · 3 years ago
By watching this video I actually understood how important math is in real life, and also how logistic regression and support vector machines actually differ. Thank you, Krish sir.
@rajraji417 · 4 years ago
Thank you so much, bro. I like you so much; your individuality shows through every day and in every video. I will become a data scientist one day...
@adipurnomo5683 · 3 years ago
Clearly explained. Highly recommended.
@gianlucalepiscopia3123 · 1 year ago
soon the best teacher out there
@saisharadhashivakumar1004 · 2 years ago
Hi sir, without you I would not have understood deep learning this much. Thank you so much.
@deleolukoya4634 · 3 years ago
Wow! God bless you for all the efforts you put together to make this known to us. In fact, you're passionate and affectionate about us, your students. More strength and grace unto you. From Nigeria.
@mathrisk · 4 years ago
I am a beginner in the subject, and your video gave a pretty good idea of the topic. Thanks.
@hafimaoubarry6967 · 2 years ago
Very Informative video. GOD bless you
@tomthomas1431 · 2 years ago
very good explanation....easy to understand
@sciWithSaj · 4 years ago
Clearly explained. Nice work, sir.
@brown_bread · 3 years ago
At 2:55, b is not equal to c, which is considered here to be the 'slope'. b is the bias term, which is not added to the feature vector when dealing with SVM.
@arjundev4908 · 4 years ago
@Krish Naik.. My respect towards you and your work has increased by multiple folds.. You are godsend..my saviour :) Thanks for your contribution!! ❤
@ranabhavesh1191 · 2 years ago
Awesome video everything got clear.. 🙏🙏
@faizalmakhrus8645 · 1 year ago
Superb! This explanation is difficult to find on YouTube.
@tukaramugile573 · 2 years ago
Very good explanation. Thanks
@malathiavinash984 · 3 years ago
This video is so good. Thank you Krish!!!
@emmanuelibrahim6427 · 2 years ago
Excellent delivery!
@lokeshrathi5500 · 4 years ago
The b value that you plot on the graph should be on the y-axis, right? (At 8:57.) Please have a check and let me know. Thanks.
@KrishnaMishra-fl6pu · 3 years ago
It's the x-intercept, and hence that is correct.
@hessamjamalkhah9781 · 3 years ago
Thank you dear Krish, you nailed SVM for me, I totally understood the concept behind it, and I really appreciate that. wish you all the best
@geetanshkalra8980 · 3 years ago
Thank you sir! Your video has helped me to get an internship to a very good company! ❤️🔥💥 Please continue the same work! It really helps us! Thank you!🔥🔥🔥
@cryptogaming8026 · 3 years ago
Please check whether w1 is the slope of the hyperplane, because if we consider the equation w1x1 + w2x2 + b = 0, then the slope comes out to be -w1/w2; so technically, in the example you explained, you took the y-axis to be your hyperplane.
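One way to settle which ratio is the slope: solve w1*x1 + w2*x2 + b = 0 for x2 (taking x2 as the vertical axis), which gives x2 = -(w1/w2)*x1 - b/w2. A quick check with hypothetical coefficients:

```python
import numpy as np

# hypothetical coefficients for the line w1*x1 + w2*x2 + b = 0
w1, w2, b = 2.0, 4.0, 1.0

# solving for x2 gives x2 = -(w1/w2)*x1 - b/w2, so the slope is -w1/w2
slope = -w1 / w2                          # -0.5 here
x1 = np.array([0.0, 1.0, 2.0])
x2 = -(w1 * x1 + b) / w2

# every generated point satisfies the original line equation
residual = w1 * x1 + w2 * x2 + b
```

The residuals come out as zeros, confirming the points lie on the line and that the slope with respect to x1 is -w1/w2.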
@jevoncharles8680 · 2 years ago
You are a GENIUS!
@waqarsarwar7012 · 4 years ago
I appreciate your effort and your way of teaching.
@tramytran1992 · 1 year ago
Thank you so much for your teaching. Wishing the best things for you.
@dipanwitamanna9540 · 4 years ago
Beautifully and easily explained. Helpful
@vijethrai2747 · 4 years ago
18:49 C value is not how many errors to consider. It is quite the opposite. If C value increases, then tendency to make mistakes decreases because loss will increase with increase in errors. And vice versa. Greater C will create overfit and lesser C will create underfit.
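The trade-off described above can be made concrete with the soft-margin objective, 0.5*||w||^2 + C * Σ hinge losses: a larger C makes each margin violation more expensive, pushing the optimizer toward fewer errors (and eventually overfitting). This is a minimal sketch with made-up toy data, not the video's example.

```python
import numpy as np

def soft_margin_objective(w, b, X, y, C):
    # 0.5*||w||^2 + C * sum of hinge losses max(0, 1 - y*(w.x + b))
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * (w @ w) + C * hinge.sum()

# toy data: two well-separated points plus one margin violator
X = np.array([[2.0, 0.0], [-2.0, 0.0], [0.5, 0.0]])
y = np.array([1.0, -1.0, 1.0])
w, b = np.array([1.0, 0.0]), 0.0

low_c = soft_margin_objective(w, b, X, y, C=0.1)     # violation barely matters
high_c = soft_margin_objective(w, b, X, y, C=100.0)  # violation dominates
```

With the same w and the same single violation, the objective jumps from 0.55 at C=0.1 to 50.5 at C=100, which is exactly why large C forces the optimizer to eliminate errors at the cost of a narrower margin.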
@debahutimishra3348 · 4 years ago
This is an awesome video on mathematics behind Linear SVM... Too Good...Keep it up
@rvg296 · 4 years ago
Krish, I guess you missed the squaring of |W|. Basically, maximizing (2/|W|) or (1/|W|) is essentially the same; this means we have to minimize |W|. Just for mathematical convenience, we write it as (1/2)(|W|)^2, because differentiating this w.r.t. W gives a clean gradient (just W).
@muzamilhussain9908 · 4 years ago
Also, the distance x2 - x1 is not the indicated distance... that is just the horizontal distance; for that distance we would have to use Pythagoras.
@AmitSharma-rj2rp · 4 years ago
@@muzamilhussain9908 yes I was wondering the same thing - how do you get horizontal distance from subtraction?
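On the distance question in this thread: the margin width 2/||w|| is the perpendicular distance between the planes wᵀx + b = +1 and wᵀx + b = -1, obtained by projecting the difference of two points (one on each plane) onto the direction of w - not a plain horizontal subtraction. A small numeric check with a hypothetical w:

```python
import numpy as np

w = np.array([3.0, 4.0])     # hypothetical weight vector, ||w|| = 5
b = 0.0

# the closest points of the two margin planes lie along the direction of w:
# scaling w so that w.x + b hits +1 and -1 respectively
x_plus = (1.0 - b) * w / (w @ w)     # satisfies w.x_plus + b = +1
x_minus = (-1.0 - b) * w / (w @ w)   # satisfies w.x_minus + b = -1

gap = np.linalg.norm(x_plus - x_minus)   # Euclidean (perpendicular) distance
```

The computed gap is 0.4, which matches 2/||w|| = 2/5 exactly; subtracting coordinates directly would only give this answer when w happens to be axis-aligned.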
@expertreviews1112 · 2 years ago
very nice and lot of effort put in to explain... complex topic but really nicely explained...
@vishaljhaveri7565 · 3 years ago
Thank you, Krish Sir.
@devasheeshvaid9057 · 4 years ago
At 3:00, how is 'WT' = [-1, 0]? W extends from the origin and is perpendicular to 'X' (i.e., the hyperplane). So shouldn't 'WT' be [-1, -1] (or some other vector perpendicular to 'X', i.e., the hyperplane)?
@ChiTamNguyen-d9d · 1 year ago
I agree, this should be [-1,-1]
@hemangdhanani9434 · 3 years ago
Understood completely... thanks for simplifying.
@ProgrammingCradle · 4 years ago
Beautifully explained... Thank you Krish :)
@sandipansarkar9211 · 4 years ago
Great explanation Krish.Thanks
@mangaenfrancais934 · 4 years ago
You are the best.
@optimusVideo · 1 year ago
You taught it well; I understood quickly.
@ManishKumar-qs1fm · 4 years ago
In one word: awesome.
@sridharmakkapati6586 · 4 years ago
Thanks for knowledge sharing
@vigneshvicky6720 · 3 years ago
Nice nice very nice❤
@vijayalakshmi3968 · 3 years ago
Thank you so much, sir... Your videos are very helpful for my course... well explained!
@rengarajanraman8608 · 3 years ago
Thanks for putting a lot of effort into explaining such complex concepts.
@davidlee4293 · 4 years ago
super good... you explain it so very well. thank you
@datakube3053 · 4 years ago
Respect for your work and efforts.
@carearayam · 4 years ago
Excellent explanation, my friend
@eeshdeepsingh7030 · 4 years ago
Thanks a lot for this video, sir ❤️ Waiting for SVM part 3, sir!!!
@SAN-te3rp · 4 years ago
I've been searching for someone who teaches maths like this; finally found him 🙏🙏
@codeforcoders69 · 4 years ago
Me too
@LanteLuthuli · 2 years ago
Legendary ... New subscriber!
@venirajan2772 · 3 years ago
Thank you so much for the awesome explanation. Keep it up, sir.
@williamblanzeisky2524 · 4 years ago
dude u r so fun to watch and u make SVM so much easier lol
@RO_BOMan · 1 year ago
No words for this great work. Thank you very much for making these concepts very, very easy to understand. I would suggest you arrange all of them in a particular order or give these videos sequential numbers, so that it will be easy to go through all the topics without any deviation. Thanks again; keep uploading more videos on different topics.
@mohamedgaal5340 · 1 year ago
Thanks bro. You really did your best to simplify things. I truly appreciate it.
@akshaybagal2208 · 3 years ago
great stuff and nice explanation!!!
@shivadumnawar7741 · 4 years ago
Great tutorial. Thank you so much sir.
@ratulghosh3849 · 4 years ago
Thanks for the explanation of intuition behind SVM.
@midhunskani · 4 years ago
I was waiting for this video. Thanks again. Liked and subbed.
@fancy4926 · 4 years ago
At around 5:30, take the point (4,4) for example: if b = 1, the product [-1, 1]T [4, 4] would be a 2x2 matrix, and then we can't say whether it is a positive or negative value. Is there an error in how the equation was written?
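The confusion above comes down to inner vs. outer product: the SVM decision value is the inner product wᵀx, a single scalar, while pairing the shapes the other way produces a 2x2 outer product. A quick NumPy sketch using the (4, 4) point from the comment (w = [-1, 1] taken as a column vector for illustration):

```python
import numpy as np

w = np.array([[-1], [1]])    # column vector, shape (2, 1)
x = np.array([[4], [4]])     # the point (4, 4) as a column vector

inner = w.T @ x              # (1,2) @ (2,1) -> (1,1): the scalar w^T x
outer = w @ x.T              # (2,1) @ (1,2) -> (2,2): not what SVM uses
```

Here the inner product is -4 + 4 = 0 (a point exactly on the boundary), and only this scalar has a sign to test; the 2x2 result arises from accidentally forming the outer product.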
@SaulWilliamss · 4 years ago
These videos are incredibly helpful. Thank you very much for sharing your knowledge with us!
@hemantdas9546 · 4 years ago
Sir, we need SVM regression.
@AbhishekKumar-gu3ny · 3 years ago
I have a doubt: at 10:43, shouldn't the maximum distance between the margins be x1 + x2? Thank you for this simplified lecture on SVM.
@K-mk6pc · 2 years ago
Sir, can you share how you learned these concepts and what reference materials you usually use, so that we can start to learn the way you do?
@mwaurades · 2 years ago
I really appreciate the tutoring Sir , keep up the good work !!!
@Neuraldata · 4 years ago
You are really an inspiration to many 👌
@JaydeepSinghTindori · 4 years ago
Thanks for your good explanation of SVM. But I have a confusion: when you changed the representation at 20:29 from max(2/||w||) in algebraic form to the optimization form min(||w||/2), I think it should be min(||w||^2 / 2), since we want to maximize 1/||w||, i.e., minimize ||w||, and the squared form differentiates cleanly.
@krishnaik06 · 4 years ago
Yes, you are right... missed that one.
@geogeo14000 · 3 years ago
@@krishnaik06 Hello, and thank you. How do we go from ||w|| to ||w||^2?
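On why ||w||²/2 is the preferred form in this thread: minimizing ||w|| and minimizing ||w||²/2 give the same solution (squaring is monotone for non-negative values), but the squared form is smooth everywhere and its gradient is simply w. A numerical check of that gradient, with an arbitrary example w:

```python
import numpy as np

w = np.array([2.0, -3.0])              # an arbitrary example vector
f = lambda v: 0.5 * (v @ v)            # the objective 0.5*||w||^2

# central-difference numerical gradient of f at w
eps = 1e-6
grad = np.array([
    (f(w + eps * e) - f(w - eps * e)) / (2 * eps)
    for e in np.eye(len(w))
])
# grad matches w itself: d/dw (0.5*||w||^2) = w
```

The numerical gradient comes out equal to w (up to floating-point error), which is why the 1/2 factor and the square are added purely for mathematical convenience.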
@tabindabhat9078 · 4 years ago
Wow. Great video. Thanks.
@turkeshpote3239 · 4 years ago
fantastic explanation
@sarabjeetsingh5033 · 3 years ago
Hi Krish, at 3:32 the matrix multiplication is incorrect. Consider a matrix A of order m x n (m rows, n columns) and a matrix B of order n x p (n rows, p columns). When you multiply A and B, the resultant matrix has order m x p. In your case you multiplied a matrix of order 2 x 1 with one of order 1 x 2, so the resultant matrix should be of order 2 x 2, but you have a 1 x 1. Also, what you have written looks more like the determinant of a matrix (a scalar value) than a matrix. Could you check this?