Soft Margin SVM : Data Science Concepts

46,908 views

ritvikmath

Days ago

SVM for REAL data.
SVM Intuition Video: • Support Vector Machine...
Hard-Margin SVM Video: • SVM (The Math) : Data ...
Hinge Loss Video: • Loss Functions : Data ...

Comments: 107
@paulbrown5839 · 3 years ago
This guy deserves to be paid for this stuff. It's brilliant.
@ritvikmath · 3 years ago
Haha, glad you think so!
@nitishnitish9172 · 1 year ago
Absolutely, I have the same thing in my mind.
@blairt8101 · 2 days ago
You saved my life, I will watch all your videos before my exam on machine learning.
@caiocfp · 3 years ago
You are a great teacher, hope this channel thrives!
@ritvikmath · 3 years ago
I hope so too!
@maurosobreira8695 · 2 years ago
Third video on SVM from this guy and I'm now a subscriber. Best explanation so far, and I watched a bunch before getting to these videos! Two thumbs up!
@xxshogunflames · 3 years ago
Awesome video, thank you for clarifying these topics for us. The format is pristine, and I get a lot from the different ways you present information, because by the second or third video I have a good foundation for the tougher parts to chew. Again, thank you!
@random_uploads97 · 2 years ago
Loved both the hard margin and soft margin videos, everything is clear in 25 minutes collectively. Thanks a lot Ritvik! May your channel thrive more, will share a word for you.
@santiagolicea3814 · 7 months ago
You're the absolute best at explaining complex things in such an easy way, it's even relaxing
@akshiwakoti7851 · 2 years ago
Thanks for making SVM easy. You're a great communicator.
@rohit2761 · 2 years ago
What an amazing video. Absolutely gold. Please make more videos, never stop making them!
@ashhabkhan · 2 years ago
Explaining complex concepts in a simple manner. That is how these topics must be taught. Wow!
@harshithg5455 · 3 years ago
Came here after Andrew Ng's videos. Found yours to be way more intuitive. Brilliant.
@jackli8603 · 1 year ago
Thank you so much!!!! You are a life saver!!! I had been troubled by the soft margin svm for a week until your video explained to me very clearly. What I didn't understand was the lambda part but now I do!!! THANKSSSSSSSSSSSSSSSSS
@johnmosugu · 1 year ago
Thank you very much, Ritvik, for simplifying this topic and even ML. God bless you more and more.
@stanlukash33 · 3 years ago
You deserve more subs and likes. Thank you for this!
@ritvikmath · 3 years ago
I appreciate that!
@adithyagiri7933 · 2 years ago
Great job man... keep bringing us these kinds of amazing stuff.
@Pazurrr1501 · 2 years ago
These videos are real hidden gems. And they deserve to not be hidden any more.
@giovannibianco5996 · 2 months ago
Definitely the best video about SVM I've found online; better than my university lectures (sadly). Great job!
@mikeyu6347 · 8 months ago
Best teacher, very articulate. Looking forward to more videos.
@Rohit-fr2ky · 1 year ago
Thanks a lot, I might not have been able to understand SVM without this.
@bytesizedbraincog · 1 year ago
You are a gem to the data science community!
@ledinhanhtan · 6 months ago
Brilliant explanation! Thank you!
@RiteshSingh-ru1sk · 3 years ago
Gem of lectures!
@58_hananirfan45 · 1 year ago
This man has single-handedly saved my life.
@yaadwinder300 · 2 years ago
The search to find a good YouTube video on SVM has finally ended; gotta watch the other topics too.
@thankyouthankyou1172 · 7 months ago
This teacher deserves a Nobel prize!
@huyvuquang2041 · 11 months ago
Thanks so much for your amazing work. Keep it up.
@axadify · 2 years ago
Such a brilliant explanation!
@cyanider069 · 8 months ago
You are really good at this, man.
@josephgill8674 · 3 years ago
Thank you from an MSc Data Science student at Exeter University in exam season.
@nukagvilia5215 · 2 years ago
Your videos are the best!
@Greatasfather · 2 years ago
I love this. Thank you so much. Helped me a lot.
@xintang7741 · 6 months ago
Well explained! Very helpful!
@aashishprasad9491 · 3 years ago
You are a great teacher; I don't know why YouTube doesn't recommend your videos. Also, please try some social media marketing.
@danalex2991 · 2 years ago
AMAZING VIDEO! You are so awesome.
@chunqingshi2726 · 1 year ago
Crystal clear, thanks a lot.
@ahmetcihan8025 · 3 years ago
Just perfect, mate.
@mv829 · 2 years ago
Thank you for this video, very helpful!
@stevenconradellis · 1 year ago
These explanations are so brilliantly and intuitively given, making daunting-looking equations and concepts understandable. Thank you @ritvikmath, you are truly a gift to data science.
@maheshsonawane8737 · 9 months ago
🌟Magnificent🌟 Very nice, thanks. Helps with interview questions.
@FEchtyy · 2 years ago
Great explanation!
@venkat5230 · 3 years ago
Wow, great lecture, clear explanation... thank you, Rit.
@e555t66 · 1 year ago
Really explained well. If you want the theoretical concepts, you could try doing the MIT MicroMasters. It's rigorous and demands 10 to 15 hours a week.
@xiaoranlin8918 · 2 years ago
Great clarification video.
@gdivadnosdivad6185 · 7 months ago
You are the best! Please consider teaching at a university!
@houyao2147 · 3 years ago
Perfect!
@vantongerent · 2 years ago
So good.
@MrGhost-do1rw · 1 year ago
I came here to understand lambda and I am not disappointed. Thank you.
@ritvikmath · 1 year ago
Of course!
@aminr23 · 2 months ago
Greatest teacher ever.
@ritvikmath · 2 months ago
Wow, thanks!
@user-wr4yl7tx3w · 2 years ago
Awesome.
@xt.7933 · 3 months ago
This is clearly explained!! Love your teaching. One question here: how do you choose lambda? What is the impact of a higher or lower lambda?
@jaivratsingh9966 · 9 months ago
Excellent.
@moatasem444 · 1 year ago
Great explanation ❤❤❤
@kankersan1466 · 3 years ago
Underrated channel.
@ritvikmath · 3 years ago
Hopefully not for long :D
@user-ug8uy2cv3s · 1 year ago
Great explanation, thank you.
@honeyBadger582 · 3 years ago
Great video! I have a question. The optimisation formula for soft-margin SVM that I usually see in textbooks is: min ||w|| + C * (sum over theta). How does the equation in your video relate to this one? Is it pretty much the same just with different symbols, or is it different? Thanks!
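The two ways of writing the soft-margin objective differ only by a positive scale factor, so they share the same minimizer: scaling λ||w||² + (1/n)·Σ hinge by 1/(2λ) gives ½||w||² + C·Σ hinge with C = 1/(2λn). A quick numeric check of that identity (the toy data and weights below are illustrative, not from the video):

```python
# Both soft-margin objectives, shown to be proportional for every (w, b)
# when C = 1 / (2 * lam * n) -- so they pick the same minimizer.

def hinge(y, score):
    # hinge loss: max(0, 1 - y * (w . x + b))
    return max(0.0, 1.0 - y * score)

def objective_lambda(w, b, data, lam):
    # form with lambda: lam * ||w||^2 + average hinge loss
    n = len(data)
    norm2 = sum(wi * wi for wi in w)
    loss = sum(hinge(y, sum(wi * xi for wi, xi in zip(w, x)) + b) for x, y in data)
    return lam * norm2 + loss / n

def objective_C(w, b, data, C):
    # textbook form: 0.5 * ||w||^2 + C * total hinge loss
    norm2 = sum(wi * wi for wi in w)
    loss = sum(hinge(y, sum(wi * xi for wi, xi in zip(w, x)) + b) for x, y in data)
    return 0.5 * norm2 + C * loss

data = [([1.0, 2.0], 1), ([2.0, 0.5], 1), ([-1.0, -1.5], -1), ([-2.0, -0.5], -1)]
lam, n = 0.1, len(data)
C = 1.0 / (2 * lam * n)
w, b = [0.3, -0.2], 0.1

# objective_C == (1 / (2 * lam)) * objective_lambda at every (w, b)
ratio = objective_C(w, b, data, C) / objective_lambda(w, b, data, lam)
print(abs(ratio - 1.0 / (2 * lam)) < 1e-9)  # True
```

(The textbook form often uses ||w|| or ½||w||²; squaring doesn't change the minimizing direction of w, only the scale handling, so the two presentations are the same model up to the choice of constants.)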
@gabeguo6222 · 1 year ago
GOAT!
@adilmuhammad6078 · 1 year ago
Very nice!!!
@ritvikmath · 1 year ago
Thank you! Cheers!
@arundas7760 · 3 years ago
Very good, thanks.
@sergecliverkana4694 · 3 years ago
Awesome, thank you very much.
@ritvikmath · 3 years ago
You are very welcome.
@504036465 · 2 years ago
Nice video.. Thank you..
@yifanzhao9942 · 3 years ago
Shoutout to my previous TA!! Also do you mind uploading pictures of whiteboard only for future videos, as it might be easier for us to check notes? Thank you!
@ritvikmath · 3 years ago
Hi Yifan! Hope you're doing well. Yes, for the newer videos I am remembering to show the final whiteboard only.
@lilianaaa98 · 1 month ago
Thanks a lot!
@xviktorxx · 3 years ago
Great videos, will you also be talking about the kernel trick?
@ritvikmath · 3 years ago
Yes I will! It is on the agenda.
@shriqam · 2 years ago
Hi Ritvikmath, many thanks for the wonderful video. I really love the simple notation you have used for the equations, which makes them very easy to understand. Can you suggest any books/courses that follow notation similar to yours, or can you please provide the sources that helped you create this content? Thanks in advance.
@loveen3186 · 1 year ago
Amazing.
@ritvikmath · 1 year ago
Thank you! Cheers!
@amankushwaha8927 · 2 years ago
Thanks.
@caseyglick5957 · 3 years ago
Your board work is great! Why are you using an L2 loss for w, rather than L1 based on what showed up in the previous video?
@vldanl · 2 years ago
I guess it's because the L2 penalty is much easier to differentiate than L1. Also, L1 is not differentiable at w = 0.
@caseyglick5957 · 2 years ago
Thanks! Having smooth derivatives does help a lot.
@DerIntergalaktische · 2 years ago
@vldanl Isn't the hinge loss part already pretty hard to differentiate, compared to ||w||?
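The non-differentiability point above can be checked numerically: the one-sided difference quotients of |w| disagree at w = 0 (a kink), while those of w² agree (smooth). A small sketch of that check (my own, not from the video):

```python
# One-sided difference quotients at w = 0 for |w| vs. w**2.
# |w| has slope -1 from the left and +1 from the right (not differentiable),
# while w**2 has slope ~0 from both sides (smooth).

def one_sided_slopes(f, at=0.0, h=1e-6):
    left = (f(at) - f(at - h)) / h    # slope approaching from the left
    right = (f(at + h) - f(at)) / h   # slope approaching from the right
    return left, right

abs_left, abs_right = one_sided_slopes(abs)
sq_left, sq_right = one_sided_slopes(lambda w: w * w)

print(abs_left, abs_right)  # -1.0 1.0 -> slopes disagree: kink at 0
print(sq_left, sq_right)    # both ~0  -> slopes agree: differentiable
```

(The hinge loss has the same kind of kink, at margin = 1 rather than at 0, which is why subgradients are used there in practice.)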
@vantongerent · 2 years ago
How do you choose your support vectors if they are no longer the closest vectors to the decision boundary? Does the value of "1" get generated automatically when you plug the values of X and Y in? Or is there some scaling that takes place to set one of the vectors' values to "1"?
@DerIntergalaktische · 2 years ago
The margin is taken into account twice, in a weird way. The obvious one is the lambda ||w|| term. But the hinge loss also has the margin as its unit of measurement, so if a data point is at distance five from the support vector, the hinge loss can change drastically depending on the size of the margin. Is this double counting of the margin intended? Should there be a normalization for it? I believe dividing the hinge loss by ||w|| should work.
@dawitabdisa7262 · 1 year ago
Hello, thank you for the tutorials. How would one apply an SVM model to classify alpha-wave data, to detect driver drowsiness? Very much looking forward to your reply.
@codeschool3964 · 2 months ago
Explained a 3-hour lecture in less than 1 hour.
@user-wr4yl7tx3w · 2 years ago
What if you made observations based upon latent variables? Could that remove the need for the parameter lambda as a prior?
@mashakozlovtseva4378 · 3 years ago
Very detailed explanation! I'd like to know how we are going to find the w and b parameters. Using gradient descent or another technique?
@stanlukash33 · 3 years ago
I had the same question.
@COMIRecords · 2 years ago
I think you can find the optimal parameters in two ways: the first consists of minimizing the primal formulation with respect to w and b, and the second consists of maximizing the dual formulation with respect to a certain alpha (a Lagrange multiplier). In the second case, once you have computed the optimal alpha, you can substitute it into the equation for w (written as a function of alpha) to find the optimal w. To find the best b you have to rearrange some conditions, but I am not sure about that.
@eltonlobo8697 · 2 years ago
You can use gradient descent and update the weights and bias for every example, like shown in this video: kzbin.info/www/bejne/i4mTl2x4g6eWqbs
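The per-example gradient-descent idea mentioned above can be sketched directly on the primal soft-margin objective λ||w||² + (1/n)·Σ hinge, using the hinge subgradient (the toy data, learning rate, and λ below are my own illustrative choices, not from the video):

```python
# Minimal subgradient descent on the primal soft-margin objective
# lam * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * (w . x_i + b)).

def train_soft_margin(data, lam=0.01, lr=0.1, epochs=200):
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # hinge subgradient is active: step toward y * x, plus weight decay
                w = [wi - lr * (2 * lam * wi - y * xi) for wi, xi in zip(w, x)]
                b = b + lr * y
            else:
                # only the regularizer pulls on w
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# linearly separable toy set
data = [([2.0, 2.0], 1), ([3.0, 1.5], 1), ([2.5, 3.0], 1),
        ([-2.0, -1.0], -1), ([-1.5, -2.5], -1), ([-3.0, -2.0], -1)]
w, b = train_soft_margin(data)
accuracy = sum(predict(w, b, x) == y for x, y in data) / len(data)
print(accuracy)
```

The dual route mentioned in the thread (maximizing over Lagrange multipliers) reaches the same solution and is what makes the kernel trick possible; the subgradient route above is just the most direct way to attack the primal.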
@achams123 · 3 years ago
What was Vapnik on when he invented this?
@matthewcarnahan1 · 2 months ago
The margin for a hard-margin SVM is pretty intuitive, but not for a soft-margin SVM. With a hard margin, it's a rule that each margin line must lie on at least one of its respective points. I think that with a soft margin, for any value of lambda, at least one of the margin lines must lie on at least one of its respective points, but it's not mandatory that both do. Do you concur?
@tule3835 · 7 months ago
Question about lambda: does that mean that when lambda is LARGE we care more about the misclassification error, and when lambda is SMALL we care more about minimizing the weight vector and maximizing the margin?
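In the objective λ||w||² + (1/n)·Σ hinge it is actually the other way around: a large λ puts the weight on keeping ||w|| small (a wide margin, width 2/||w||), while a small λ puts the weight on the hinge/misclassification term. A deterministic sketch of this tradeoff, comparing two candidate classifiers on the same toy data (data and candidate weights are my own illustrative choices):

```python
# Two candidate classifiers for the same data: a "wide margin" w (small ||w||,
# some hinge loss) and a "narrow margin" w (large ||w||, little hinge loss).
# The objective lam * ||w||^2 + (1/n) * sum hinge picks a different winner
# depending on lambda.

def objective(w, data, lam):
    n = len(data)
    norm2 = sum(wi * wi for wi in w)
    total_hinge = sum(max(0.0, 1.0 - y * sum(wi * xi for wi, xi in zip(w, x)))
                      for x, y in data)
    return lam * norm2 + total_hinge / n

data = [([2.0, 2.0], 1), ([3.0, 1.0], 1), ([0.2, 0.2], 1),
        ([-2.0, -2.0], -1), ([-1.0, -3.0], -1)]

w_wide = [0.25, 0.25]   # margin width 2/||w|| is large; (0.2, 0.2) violates it
w_narrow = [2.0, 2.0]   # margin is narrow; almost no hinge loss

# small lambda: hinge loss dominates -> the narrow-margin w looks better
print(objective(w_narrow, data, 0.01) < objective(w_wide, data, 0.01))  # True
# large lambda: ||w||^2 dominates -> the wide-margin w looks better
print(objective(w_wide, data, 0.5) < objective(w_narrow, data, 0.5))    # True
```

So λ plays the same role as 1/C in the textbook C-formulation: turning λ up trades classification accuracy for a wider margin.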
@user-ws8jm8uq4c · 1 year ago
How can we still have some data between the margins even after rescaling the w vector so that min |w^T x + b| = 1? Doesn't that mean we find the closest possible data points to the hyperplane and rescale w so that the distance from the closest data points to the hyperplane comes out to 1? That way, there shouldn't be any points between the margins... could you help correct this?
@Cerivitus · 2 years ago
Why are we minimizing ||w|| to the power of 2 for soft-margin SVM, but only ||w|| for hard-margin SVM?
@PF-vn4qz · 2 years ago
So can we mathematically solve the soft-margin SVM optimization problem for the vector w and the value b? And if so, can anyone point to where to read up on this?
@user-wr4yl7tx3w · 2 years ago
Where does the kernel come in?
@muralikrishna2691 · 1 year ago
Is hinge loss differentiable?
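Almost: hinge(z) = max(0, 1 − z) is differentiable everywhere except at z = 1, where the one-sided slopes (−1 and 0) disagree; optimizers therefore use a subgradient there. A small check (my own sketch, not from the video):

```python
# hinge(z) = max(0, 1 - z): the one-sided slopes at z = 1 disagree,
# so the function is not differentiable at that single point.

def hinge(z):
    return max(0.0, 1.0 - z)

def hinge_subgradient(z):
    # a standard subgradient choice (any value in [-1, 0] is valid at z = 1)
    return -1.0 if z < 1 else 0.0

h = 1e-6
left = (hinge(1.0) - hinge(1.0 - h)) / h   # approx -1.0 (slope from the left)
right = (hinge(1.0 + h) - hinge(1.0)) / h  # 0.0 (slope from the right)
print(left, right)  # slopes disagree at z = 1: not differentiable there
```

Everywhere else, the derivative is exactly −1 (when z < 1) or 0 (when z > 1), which is what makes subgradient descent on the soft-margin objective practical.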
@Ranshin077 · 3 years ago
I love your board work, but you should really show an image of the board without you in it, or delay your walk into the picture by a second or two at the beginning, so I can snag a shot for my notes a bit easier, lol.
@ritvikmath · 3 years ago
Noted! I'm starting to remember this for my new videos. Thanks!
@redherring0077 · 2 years ago
Haha, I have dedicated a whole hard disk to Ritvik's data science videos. I just hope he is going to write a book, or even better, do an end-to-end data science course on Coursera 😍😍
@juanguang5633 · 1 year ago
It would be nicer if you talked about slack variables.
@sushantpatil2566 · 10 months ago
EUREKA!
@junli1865 · 2 years ago
Thank you!