This guy deserves to be paid for this stuff. It's brilliant.
@ritvikmath 3 years ago
Haha, glad you think so!
@nitishnitish9172 1 year ago
Absolutely, I had the same thought.
@caiocfp 3 years ago
You are a great teacher, hope this channel thrives!
@ritvikmath 3 years ago
I hope so too!
@peterlinhan 20 days ago
Your teaching is way better than that of many lecturers at famous universities.
@maurosobreira8695 2 years ago
Third video on SVM from this guy and I'm now a subscriber. Best explanation so far, and I watched a bunch before getting to these videos! Two thumbs up!
@harshithg5455 3 years ago
Came here after Andrew Ng's videos. Found yours to be way more intuitive. Brilliant!
@DivyamanRawat 1 month ago
It's great to be in times like these, with wonderful learning resources available on the internet for free. Some of my favourite learning resources: 1) 3Blue1Brown, 2) MIT courses, and the latest entrant, 3) Ritvik Math. Thanks for posting these videos!
@ian1955 2 months ago
I can't believe how good of an explanation this is. Great job! Keep it up!
@akshiwakoti7851 2 years ago
Thanks for making SVM easy. You’re a great communicator.
@giovannibianco5996 5 months ago
Definitely the best video about SVM I've found online; better than my university lectures (sadly). Great job!
@stanlukash33 3 years ago
You deserve more subs and likes. Thank you for this!
@ritvikmath 3 years ago
I appreciate that!
@Pazurrr1501 2 years ago
These videos are real hidden gems, and they deserve not to be hidden any more.
@jackli8603 2 years ago
Thank you so much!!!! You are a lifesaver!!! I had been struggling with soft-margin SVM for a week until your video explained it very clearly. What I didn't understand was the lambda part, but now I do!!! THANKS!!!
@yaadwinder300 2 years ago
The search for a good YouTube video on SVM has finally ended; gotta watch the other topics too.
@MrGhost-do1rw 1 year ago
I came here to understand lambda and I am not disappointed. Thank you.
@ritvikmath 1 year ago
Of course!
@ashhabkhan 2 years ago
Explaining complex concepts in a simple manner. That is how these topics must be taught. Wow!
@josephgill8674 3 years ago
Thank you from an MSc Data Science student at Exeter University in exam season!
@sahilbhagat4891 8 days ago
You are a great teacher. Respect!
@rohit2761 2 years ago
What an amazing video. Absolute gold. Please make more videos; never stop making them.
@bytesizedbraincog 1 year ago
You are a gem to the data science community!
@johnmosugu 1 year ago
Thank you very much, Ritvik, for simplifying this topic and even ML. God bless you more and more
@mikeyu6347 11 months ago
Best teacher, very articulate. Looking forward to more videos.
@maheshsonawane8737 1 year ago
🌟Magnificent🌟 Very nice, thanks. This helps with interview questions.
@gdivadnosdivad6185 10 months ago
You are the best! Please consider teaching at a university!
@kankersan1466 3 years ago
Underrated channel.
@ritvikmath 3 years ago
Hopefully not for long :D
@aashishprasad9491 3 years ago
You are a great teacher; I don't know why YouTube doesn't recommend your videos. Also, please try some social media marketing.
@aminr23 6 months ago
Greatest teacher ever!
@ritvikmath 6 months ago
Wow, thanks!
@ledinhanhtan 9 months ago
Brilliant explanation! Thank you!
@xviktorxx 3 years ago
Great videos! Will you also be talking about the kernel trick?
@ritvikmath 3 years ago
Yes I will! It's on the agenda.
@xt.7933 7 months ago
This is clearly explained!! Love your teaching. One question here: how do you choose lambda? What is the impact of a higher or lower lambda?
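A common answer, not covered in the video itself, is to treat lambda as a hyperparameter and pick it by cross-validation. Below is a minimal sketch assuming scikit-learn, whose LinearSVC exposes a parameter C that plays roughly the role of 1/lambda (large C means weak regularization, i.e. small lambda); the toy data is made up for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Toy data standing in for whatever you are classifying.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Score a logarithmic grid of C values by 5-fold cross-validation
# and keep the best one.
grid = GridSearchCV(LinearSVC(max_iter=10000),
                    {"C": np.logspace(-3, 3, 13)}, cv=5)
grid.fit(X, y)
print("best C:", grid.best_params_["C"])
```

Roughly speaking, a larger lambda (smaller C) shrinks w and widens the margin at the cost of more margin violations; a smaller lambda (larger C) does the opposite.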
@RiteshSingh-ru1sk 3 years ago
Gem of lectures!
@caseyglick5957 3 years ago
Your board work is great! Why are you using an L2 loss for w, rather than L1, based on what showed up in the previous video?
@vldanl 3 years ago
I guess it's because the L2 penalty is much easier to differentiate than L1. Also, L1 is not differentiable at w = 0.
@caseyglick5957 3 years ago
Thanks! Having smooth derivatives does help a lot.
@DerIntergalaktische 2 years ago
@@vldanl Isn't the hinge loss part already pretty hard to differentiate, compared to ||w||?
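For reference, a sketch of the derivatives being discussed (standard results, not quoted from the video): the squared norm is smooth everywhere, the plain norm is not differentiable at zero,

$$\frac{\partial}{\partial w}\|w\|^2 = 2w, \qquad \frac{\partial}{\partial w}\|w\| = \frac{w}{\|w\|} \;\text{(undefined at } w = 0\text{)},$$

and the hinge term $L_i = \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr)$ has a single kink, handled with a subgradient: with respect to $w$ it is $0$ when $y_i(w^\top x_i + b) > 1$, and $-y_i x_i$ when it is below $1$; at exactly $1$, either value is a valid subgradient.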
@adithyagiri7933 3 years ago
Great job, man... keep bringing us this kind of amazing stuff.
@xintang7741 9 months ago
Well explained! Very helpful!
@nukagvilia5215 2 years ago
Your videos are the best!
@yifanzhao9942 3 years ago
Shoutout to my previous TA!! Also, do you mind showing a picture of just the whiteboard in future videos, as it might be easier for us to check notes? Thank you!
@ritvikmath 3 years ago
Hi Yifan! Hope you're doing well. Yes, for the newer videos I am remembering to show the final whiteboard on its own.
@danalex2991 2 years ago
AMAZING VIDEO! You are so awesome.
@vantongerent 2 years ago
So good.
@bztomato3131 2 months ago
I have a clear picture of SVM now, thanks a lot, appreciate you. Will you talk about how to minimize those objectives?
@chunqingshi2726 2 years ago
Crystal clear, thanks a lot!
@e555t66 1 year ago
Really well explained. If you want the theoretical concepts, you could try the MIT MicroMasters. It's rigorous and demands 10 to 15 hours a week.
@huyvuquang2041 1 year ago
Thanks so much for your amazing work. Keep it up.
@axadify 3 years ago
Such a brilliant explanation!
@xiaoranlin8918 2 years ago
Great clarification video
@jaivratsingh9966 1 year ago
Excellent
@moatasem444 1 year ago
Wonderful explanation ❤❤❤
@Cerivitus 2 years ago
Why are we minimizing ||w||² for soft-margin SVM but only ||w|| for hard-margin SVM?
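A brief note on why this swap is harmless (a standard fact, not specific to this video): the norm is nonnegative and squaring is monotone on nonnegative numbers, so under the same constraints

$$\arg\min_{w}\ \|w\| \;=\; \arg\min_{w}\ \|w\|^2,$$

and the squared version is usually preferred because its gradient, $2w$, is defined everywhere, while $\|w\|$ is not differentiable at $w = 0$.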
@Greatasfather 3 years ago
I love this. Thank you so much. Helped me a lot
@MEDDAHKhaledFouad 1 year ago
Great explanation, thank you!
@tule3835 11 months ago
Question about lambda: does that mean when lambda is LARGE we care more about the misclassification error, and when lambda is SMALL we care about minimizing the weight vector and maximizing the margin???
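Assuming the objective has the form used in the video, hinge loss plus a weighted norm penalty,

$$\min_{w,b}\ \sum_{i=1}^{n} \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr) \;+\; \lambda \|w\|^2,$$

the roles are the reverse of the above: a LARGE lambda puts more weight on the $\|w\|^2$ term, favoring a small $w$ and hence a wide margin while tolerating more violations; a SMALL lambda puts more weight on the misclassification (hinge) term, favoring fewer violations at the cost of a narrower margin.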
@gabeguo6222 1 year ago
GOAT!
@mv829 3 years ago
Thank you for this video, very helpful!
@codeschool3964 5 months ago
Explained a 3-hour lecture in less than 1 hour.
@user-wr4yl7tx3w 3 years ago
Awesome.
@504036465 3 years ago
Nice video. Thank you!
@ahmetcihan8025 3 years ago
Just perfect, mate.
@houyao2147 3 years ago
Perfect!
@FEchtyy 3 years ago
Great explanation!
@i-FaizanulHaq 4 days ago
Please reduce the number of ads, PLEASE.
@matthewcarnahan1 5 months ago
The margin for a hard-margin SVM is pretty intuitive, but not for a soft-margin SVM. With hard margin, it's a rule that each margin line must lie on at least one of its respective points. I think with soft margin, there's a rule that for any value of lambda at least one of the margin lines must lie on at least one of its respective points, but it's not mandatory that both do. Do you concur?
@random_uploads97 2 years ago
Loved both the hard-margin and soft-margin videos; everything is clear in 25 minutes collectively. Thanks a lot, Ritvik! May your channel thrive; I'll spread the word for you.
@adilmuhammad6078 1 year ago
Very nice!!!
@ritvikmath 1 year ago
Thank you! Cheers!
@achams123 3 years ago
What was Vapnik on when he invented this?
@lilianaaa98 4 months ago
Thanks a lot!
@amankushwaha8927 3 years ago
Thanks
@arundas7760 3 years ago
Very good, thanks
@dawitabdisa7262 1 year ago
Hello, thank you for the tutorials. How do I apply an SVM model to classify alpha data, to detect driver drowsiness? Really looking forward to your reply.
@user-wr4yl7tx3w 3 years ago
Where does the kernel come in?
@honeyBadger582 3 years ago
Great video! I have a question. The optimization formula for soft-margin SVM that I usually see in textbooks is: min ||w|| + C · (a sum over the slack variables θ). How does the equation in your video relate to this one? Is it pretty much the same, just with different symbols, or is it different? Thanks!
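A sketch of the standard correspondence (constants vary by textbook, so treat this as approximate rather than as the video's derivation): in the textbook form, the slack variables at the optimum equal the hinge losses, $\xi_i = \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr)$, so

$$\min_{w,b,\xi}\ \|w\|^2 + C\sum_i \xi_i \quad\Longleftrightarrow\quad \min_{w,b}\ \sum_i \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr) + \lambda\|w\|^2, \qquad \lambda \propto \tfrac{1}{C},$$

after dividing the left-hand objective through by $C$. So yes: it is essentially the same problem, with $\lambda$ playing the inverse role of $C$.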
@mashakozlovtseva4378 3 years ago
Very detailed explanation! I'd like to know, how are we going to find the w and b params? Using gradient descent or another technique?
@stanlukash33 3 years ago
I had the same question
@COMIRecords 3 years ago
I think you can find the optimal params in two ways: the first consists in minimizing the primal formulation of the problem with respect to w and b, and the second consists in maximizing the dual formulation with respect to a certain alpha (which is a Lagrange multiplier). In the second case, once you have computed the optimal alpha, you can substitute it into the equation for w (written as a function of alpha) and you will find the optimal w. In order to find the best b you have to rearrange some conditions, but I am not sure about that.
@eltonlobo8697 2 years ago
You can use gradient descent and update the weights and bias for every example, as shown in this video: kzbin.info/www/bejne/i4mTl2x4g6eWqbs
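For anyone who wants to see a concrete minimizer, here is a minimal sketch, written for this thread rather than taken from the video, of plain batch subgradient descent on the hinge-loss-plus-penalty objective; the function name, hyperparameters, and toy data are all made up for illustration:

```python
import numpy as np

def svm_subgradient_descent(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimize (1/n) * sum_i hinge(x_i, y_i) + lam * ||w||^2 by
    batch subgradient descent. X: (n, d) array; y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # points that violate the margin
        # Hinge subgradient: -y_i * x_i for violators, 0 otherwise;
        # the penalty term contributes 2 * lam * w.
        grad_w = -(y[viol][:, None] * X[viol]).sum(axis=0) / n + 2 * lam * w
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Tiny made-up example: two Gaussian blobs in 2D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w, b = svm_subgradient_descent(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```

The dual/Lagrange-multiplier route described above is what most solvers (and the kernel trick) build on; the subgradient route here is just the most direct way to see the primal objective being minimized.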
@shriqam 2 years ago
Hi ritvikmath, many thanks for the wonderful video. I really love the simple notation you have used for the equations, which makes them very easy to understand. Can you suggest any books/courses that follow notation similar to yours, or provide the sources that helped you create this content? Thanks in advance!
@송진-l8j 2 years ago
How can we still have some data between the margins even after rescaling the w vector so that min |w^T x + b| = 1? Doesn't that mean we find the closest possible data points to the hyperplane and rescale w so that the distance from the closest data points to the hyperplane equals 1? That way, there shouldn't be any points between the margins... could you help correct me?
@DerIntergalaktische 2 years ago
The margin is taken into account twice in a weird way. The obvious one is the lambda·||w|| term. But the hinge loss also has the margin as its unit of measurement, so if a data point is at distance five from the support vector, the hinge loss can change drastically depending on the size of the margin. Is this double counting of the margin intended? Should there be a normalization for this? I believe dividing the hinge loss by ||w|| should work.
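One standard way to frame the point being raised here (a general fact, not a quote from the video): the quantity inside the hinge is the functional margin, which is the geometric distance scaled by $\|w\|$,

$$\text{geometric distance of } x_i = \frac{|w^\top x_i + b|}{\|w\|}, \qquad y_i(w^\top x_i + b) = \|w\| \cdot (\text{signed geometric distance}),$$

so the hinge term is indeed measured in units of the margin width, and dividing it by $\|w\|$ is exactly what would convert it back into a geometric distance.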
@muralikrishna2691 2 years ago
Is the hinge loss differentiable?
@juanguang5633 1 year ago
It would be nicer if you talked about slack variables.
@PF-vn4qz 3 years ago
So can we mathematically solve the soft-margin SVM optimization problem for the vector w and the value b? And if so, can anyone point me to where to read up on this?
@Ranshin077 3 years ago
I love your board work, but you should really have an image of the board without you in it, or just delay your walk into the picture by a second or two at the beginning, so I can snag a shot for my notes a bit more easily, lol.
@ritvikmath 3 years ago
Noted! I'm starting to remember this for my new videos. Thanks!
@redherring0077 2 years ago
Haha, I have dedicated a whole hard disk to Ritvik's data science videos. I just hope he is going to write a book, or even better, do an end-to-end data science course on Coursera 😍😍
@user-wr4yl7tx3w 3 years ago
What if you made observations based upon latent variables? Could that remove the need for the parameter lambda as a prior?
@vantongerent 2 years ago
How do you choose your support vectors if they are no longer the closest vectors to the decision boundary? Does the value of "1" get generated automatically when you plug the values of X and Y in? Or is there some scaling that takes place to set one of the vectors' values to "1"?
Awesome video, thank you for clarifying these topics for us. The format is pristine, and I get a lot from the different ways you present information, because by the second or third video I have a good foundation for chewing on the tougher parts. Again, thank you!
@santiagolicea3814 10 months ago
You're the absolute best at explaining complex things in such an easy way; it's even relaxing.
@thankyouthankyou1172 10 months ago
This teacher deserves a Nobel Prize!
@blairt8101 3 months ago
You saved my life. I will watch all your videos before my machine learning exam.
@stevenconradellis 1 year ago
These explanations are so brilliantly and intuitively given, making daunting-looking equations and concepts understandable. Thank you @ritvikmath, you are truly a gift to data science.
@Rohit-fr2ky 2 years ago
Thanks a lot, I might not have been able to understand SVM without this.