Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!
@naps9249 5 years ago
The best machine learning / deep learning resource I've learned from.
@johncyjoanofarc 4 years ago
This video should go viral, so that people can benefit from it. Great teaching!
@sofiayz7472 3 years ago
This is the best SVM explanation! I never truly understood it until I watched your video!
@letyrodridc 4 months ago
Amazing explanation, Luis! As usual. You are a great professor, turning complex topics into very simple explanations.
@tw5265-p5t 3 months ago
You made SVMs look like a walk in the park. Thoroughly enjoyed this, as I enjoyed your Math for ML specialisation on Coursera.
@mudcoff 5 years ago
Mr. Serrano, you are the only one who explains the logic of ML and not the technicalities. Thank you.
@hichamsabah31 4 years ago
Best explanation of SVM on YouTube. Keep up the good work.
@JohnTheStun 5 years ago
Visual, thorough, informal - perfect!
@mohammedhasan6522 5 years ago
As always, very nicely and easily explained. Looking forward to seeing your explanations of PCA, t-SNE, and some topics in reinforcement learning.
@EliezerTseytkin 5 years ago
Pure genius. It really takes a genius to explain these things with such extreme simplicity.
@ብሌናይጻዕዳ 4 years ago
The best SVM explanation I've listened to. Thank you.
@bodenseeboys 3 years ago
I really like your accent; I could listen all day. Living legend, Luis.
@ismailcezeri1691 4 years ago
The best explanation of SVM I have ever seen
@giannismaris13 2 years ago
BEST explanation of SVM so far!
@nguyenbaodung1603 3 years ago
This is terrifying, omg. You approach it so perfectly, and all the math behind it guides me to the point where I have to say WOW! Such a good observation; this video is by far gold. I love your approach at 22:56 so much: you guide me to that point and say "that's the regularization term", and I was like, omg, what is happening, that's what I was trying to understand all this time, and you just explained it in a few minutes. Really appreciate it.
@zullyholly 4 years ago
A very succinct way of explaining the eta and C hyperparameters. Normally I just take things for granted and do hyperparameter tuning.
@xruan6582 4 years ago
Great tutorial. (16:23) "if point is blue, and ap + bq + c > 0": I think the equation should be BLUE (to match the blue dash on the graph) rather than red. Similarly, "if point is red, and ap + bq + c < 0" should be RED (to match the red dash on the graph) instead of blue. Pardon me if I am wrong.
@JimmyGarzon 5 years ago
Thank you, this is fantastic! Your visual explanations are great; they've really helped me understand the intuition behind these techniques.
@ignaciosanchezgendriz1457 1 year ago
Luis, your videos are simply wonderful! I think about how much knowledge and clarity were needed to make them. Quote by Dejan Stojanovic: "The most complicated skill is to be simple."
@imagnihton2 2 years ago
I am way too late here...but so happy to have found a gold mine of information! Amazing explanation!!
@Vikram-wx4hg 3 years ago
Super explanation, Luis! It's great when someone can bring out the intuition and meaning behind the mathematics in such a clear way!
@RIYASHARMA-he9vz 4 years ago
The nicest explanation of SVM I have ever seen.
@rajeshvarma2162 2 years ago
Thanks for your easy and understandable explanation
@karanpatel1906 4 years ago
Simply awesome... even "thank you" is not enough to describe how good this video is. It explains the toughest things in a kid's language.
@khatiwadaAnish 1 year ago
You made complex topic very easily understandable 👍👍
@말바른-e7f 3 years ago
Very insightful lecture. Thank you very much Dr Serrano.
@shrisharanrajaram4766 4 years ago
Hats off to you, sir. Very clear on the concept.
@samirelzein1978 4 years ago
The more you speak, the better it gets. Please keep giving practical examples of applications at the end of each video.
@drewlehe3763 5 years ago
This is a great explanation of the concepts; it helped me. But isn't this video about the support vector classifier rather than the SVM (which uses kernelization)? The SVC uses the maximal margin classifier with a budget parameter for errors, and the SVM uses the SVC in an expanded feature space made by kernelization.
@meenakshichoudhary4554 5 years ago
Sir, thank you for the video; extremely well explained in a short duration. Really appreciated.
@krishnanarra5578 3 years ago
Awesome... I liked your videos so much that I bought your book, and the book is great too.
@SerranoAcademy 3 years ago
Thank you Krishna, so glad to hear you liked it! ;)
@macknightxu2199 4 years ago
16:36: We multiply a, b, c by 0.99, but in the loop the sign of 0.99ap + 0.99bq + 0.99c is the same as that of ap + bq + c, so isn't multiplying by 0.99 pointless?
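A quick numeric check (a sketch of my own, not code from the video) suggests why it isn't pointless: scaling a, b, c by 0.99 leaves the separating line ax+by+c=0, and hence every classification, unchanged, but it widens the gap between the margin lines ax+by+c=1 and ax+by+c=-1, whose width works out to 2/sqrt(a²+b²):

```python
# Sketch only: assumes the margin lines are ax+by+c = 1 and ax+by+c = -1,
# whose separation is 2 / sqrt(a^2 + b^2), as derived in the video.
import numpy as np

a, b, c = 2.0, -1.0, 0.5
p, q = 1.0, 3.0                          # an arbitrary point

print(np.sign(a*p + b*q + c))            # classification before the step
print(2 / np.hypot(a, b))                # margin width before: ~0.894

a, b, c = 0.99*a, 0.99*b, 0.99*c         # the "expand" step from the video
print(np.sign(a*p + b*q + c))            # same sign: ax+by+c=0 is unchanged
print(2 / np.hypot(a, b))                # width grew to ~0.903
```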
@obheech 5 years ago
Very nice explanations. May your channel flourish!!
@koushikkou2134 4 years ago
Mate, you're a great teacher.
@yasssh7835 4 years ago
Best explanation! You got some skills to teach hard things in an easy way.
@KundanKumar-zu1xk 3 years ago
As always, an excellent and easy-to-understand video.
@polarbear986 3 years ago
Best SVM explanation. Thanks a lot!
@MANISHMEHTAIIT 5 years ago
Nice, sir; best teaching style. Love the way you teach...
@08ae6013 5 years ago
Thank you very much for this video. As usual, you are so good at explaining complex things in a simple way. For the first time I am able to understand the motive behind the SVC and how it differs from logistic regression. Can you please make a video on SVM kernels (polynomial, Gaussian, radial...)?
@vitor613 3 years ago
HOLY SHIT, BEST EXPLANATION EVER
@frankhendriks2637 3 years ago
Hi Luis, thanks very much for these videos. I watch them with great pleasure. I have some questions about this one, each preceded by the moment in the video (mm:ss) where it arises.

14:26: For determining whether a point is correctly classified, should you compare the red points to the red (dashed) line and the blue points to the blue (dashed) line? Or should we compare all points to the black line? I assume it is the first, although this is not mentioned explicitly.

22:07: The margin is different when you start with a different value of d in ax+by+c=d. Would you always start with d=1 and -1, or are there situations where you start with other values of d (see also my question below)?

27:33: Two questions here. 1) In the second example the margin is actually not increased but decreased. Your video, however, only talks about expansion, not the opposite. How does reduction of the margin happen? Or does this only work by starting the algorithm with a smaller expansion, so with a smaller value of d than 1 in ax+by+c=d? 2) It seems to me that the first solution will also be the result of minimizing the log-loss function, as this maximizes the probabilities that a point is classified correctly. So the further the points are from the line, on the correct side, the better. And that seems to be the case for the first solution. So what is the difference between the log-loss approach and the SVM approach? Do they deliver different results? If so, when would you choose one or the other?

Thanks, Frank
@witoldsosnowski6764 5 years ago
A very good explanation compared to others available on the Internet.
@humzaiftikhar1130 3 years ago
Thank you very much for the hard work. It was so informative and well described.
@macknightxu2199 4 years ago
I think the SVM's loop should use one line (ap+bq+c-1 > 0) for the blue points and another line (ap+bq+c+1 < 0) for the red points.
@olayomateoreynaud9956 3 years ago
I think you are right; I don't know anything about SVM (which is why I ended up here), but I was thinking during the entire video that it doesn't make sense to create the parallel lines if they are not used.
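For what it's worth, here is a minimal sketch of how such a loop could look if the ±1 lines are used directly, as the comment above proposes. This is a reconstruction from the discussion in these comments, not the video's code; the learning rate 0.01 and expansion rate 0.99 are the values mentioned here.

```python
# Reconstruction under assumptions from the comments: blue points should end up
# with ap + bq + c > 1, red points with ap + bq + c < -1.
import numpy as np

def train(points, labels, eta=0.01, expansion=0.99, steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    a, b, c = rng.normal(size=3)                        # start from a random line
    for _ in range(steps):
        i = rng.integers(len(points))
        p, q = points[i]
        if labels[i] == "blue" and a*p + b*q + c < 1:    # not yet past the +1 line
            a, b, c = a + eta*p, b + eta*q, c + eta      # nudge the line toward it
        elif labels[i] == "red" and a*p + b*q + c > -1:  # not yet past the -1 line
            a, b, c = a - eta*p, b - eta*q, c - eta
        a, b, c = expansion*a, expansion*b, expansion*c  # widen the margin a little
    return a, b, c
```

On separable data, the 0.99 step keeps trying to widen the margin while the correction steps push back whenever a point falls inside it, which is also one way to see why the lines don't diverge forever (a question asked further down).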
@EngineeringChampion 5 years ago
Thank you for simplifying the concepts! I enjoyed watching this video!
@AA-yk8zi 3 years ago
Really good explanation! Thank you, sir.
@gitadanesh7496 4 years ago
Explained very simply. Thanks a lot.
@sandeepgill4282 2 years ago
Thanks a lot for such a nice explanation.
@johnrogers1274 5 years ago
Efficient, effective and fun. Thanks very much
@PedroTrujilloV 3 months ago
Thanks!
@SerranoAcademy 3 months ago
Many thanks again, @PedroTrujillo! :)
@macknightxu2199 4 years ago
In the loop, when do you use the parallel lines ax+by+c=1 and ax+by+c=-1?
@Pulorn1 11 months ago
Thank you for the good explanation. However, I miss some introduction: what is its added value compared to logistic regression? And some recommendations on when to prioritize this algorithm over others...
@ocarerepairlab8218 2 years ago
Hey Luis, I have recently come across your videos and I am blown away by your simple approach to delivering the mathematics and logic, especially the mention of the applications. A quick one: DO YOU TAKE STUDENTS? I WOULD LIKE TO ENROLL. I am most interested in the analysis of biological data, and I rarely find videos as good as this. I'm simply in love with your methods!!!!
@7anonimo1 3 years ago
Thank you for your work Luis!
@yeeunsong3423 5 years ago
Thanks for your easy and understandable explanation :)
@bassimeledath2224 5 years ago
Legend. Keep doing what you do!
@kimsethseu6596 3 years ago
Thank you for the good explanation.
@OL8able 4 years ago
Thanks Luis, SVM makes much sense now :)
@ronaktiwari7041 3 years ago
You are the best Luis.
@houyao2147 5 years ago
I love this so much! Explained in a very friendly way!
@anujshah645 3 years ago
In the SVM pseudo-algorithm, in the last step we multiply a, b, c by 0.99; then shouldn't the right-hand side also be multiplied by 0.99, making it 0.99 and not 1? Am I missing something?
@alyyahfoufy6222 4 years ago
Hello, when we multiply the equation by the expansion rate of 0.99, should the right sides of the equations be 0.99, 0, and -0.99? Thanks.
@keshavkumar7769 4 years ago
What an explanation. Damn good. You are great, sir. Please make some videos on XGBoost and other algorithms too.
@dante_calisthenics 4 years ago
And at step 5, I think that after adding/subtracting 0.01 you should also do gradient descent, right?
@dilipgawade9686 5 years ago
Hello sir, do we have a video on feature selection?
@raviankitaava 4 years ago
I would be grateful if you could do explanations of Gaussian processes and hyperparameter optimisation techniques.
4 years ago
So if the data is separable with a large margin, the margin error is small... even though the model produces a worse classification than a small-margin model with a high margin error. Is that correct?
@sandipansarkar9211 3 years ago
Great explanation
@rakhekhanna 4 years ago
You are an Awesome Teacher. Love you :)
@dante_calisthenics 4 years ago
Can I ask: the step of separating the lines is only for optimizing the model, right? As in, when two lines have already separated the training data, you expand them to see how wide the margin is?
@rafaelborbacs 5 years ago
How do you generalize these algorithms to many dimensions? My problem has about 50 attributes instead of 2, and I need to classify data as "red or blue".
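The idea carries over unchanged to any number of attributes: ax+by+c just becomes a dot product w·x + c, and the line becomes a hyperplane. A sketch with scikit-learn (an assumption; the video doesn't show code, and the data here is made up for illustration):

```python
# Hypothetical setup: X is an (n_samples, 50) array, y holds "red"/"blue" labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # 200 points, 50 attributes
y = np.where(X[:, 0] + X[:, 1] > 0, "red", "blue")   # toy labels for illustration

clf = SVC(kernel="linear")    # a linear SVM; the math is identical in 50 dimensions
clf.fit(X, y)
print(clf.predict(X[:5]))     # classify points the same way
```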
@jaikumaranandapadmanaban1525 4 years ago
Hi sir, why are the parallel lines equated to +1 and -1?
@chetantanwar8561 4 years ago
Sir, please also teach its kernel method in depth.
@ruskinchem4300 3 years ago
Hi Luis, the explanation is great, no doubt, but I think the equations you wrote for the margin error should be ax+by=1 and ax+by=-1.
@AmitSharma-rj2rp 4 years ago
Can someone explain why the margins don't keep diverging infinitely? The final step of the SVM algorithm involves multiplying a, b, and c by 0.99. If you keep doing that, don't you just get lines that are infinitely far apart? Thank you.
@iidtxbc 5 years ago
What is the name of the algorithm you have introduced in the lecture?
@gammaturn 5 years ago
Thank you very much for this amazing video. I came across your channel only recently, and I do like your way of explaining these complicated topics. I have two (hopefully not too dumb) questions regarding SVMs. Given the similarity of SVMs and logistic regression, would it be a good idea to start from an LR result instead of a random line? And did I understand correctly that the distance between the two lines can only increase during the search for the best solution? Wouldn't it be conceivable that at some point the combined error function decreases by decreasing the distance between the lines?
@SerranoAcademy 5 years ago
Thank you, great questions! 1. That's a good idea; it's always good to start from a good position rather than a random one. Since the two algorithms are of similar speed (complexity), I'm not sure starting from LR is necessarily better than just doing an SVM from the start, but it's definitely worth a try. 2. Actually, in the process of moving the line, one could change the coefficients in such a way that the lines get a little closer again (for example, if a and b both increase in magnitude, the lines get closer together).
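A quick numeric illustration of that second point, assuming the margin width is 2/sqrt(a² + b²) as in the video:

```python
import numpy as np

a, b = 1.0, 1.0
print(2 / np.hypot(a, b))   # width ~1.41
a, b = 2.0, 2.0             # both magnitudes increased by an update
print(2 / np.hypot(a, b))   # width ~0.71: the lines moved closer together
```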
@mohameddjemai4840 4 years ago
Thank you very much for this video.
@farzadfarzadian8827 5 years ago
SVM is constrained optimization, so does it need Lagrange multipliers?
@KoreaRwkz 4 years ago
22:00 can anyone derive that expression?
@SerranoAcademy 4 years ago
It takes a bit of calculation, but here's a place where it's done: www.ck12.org/geometry/Distance-Between-Parallel-Lines/lesson/Distance-Between-Parallel-Lines-GEOM/
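For anyone who prefers it inline, here is a short version of that derivation, assuming the expression at 22:00 is the margin width 2/sqrt(a² + b²) between the lines ax + by + c = 1 and ax + by + c = -1:

```latex
% Distance from a point (x_0, y_0) to the line ax + by + c = 0:
%   |a x_0 + b y_0 + c| / sqrt(a^2 + b^2).
% Take a point (x_0, y_0) on ax + by + c = 1, so a x_0 + b y_0 + c = 1.
% Its distance to ax + by + c = -1, rewritten as ax + by + (c + 1) = 0, is
\[
  d = \frac{|a x_0 + b y_0 + (c + 1)|}{\sqrt{a^2 + b^2}}
    = \frac{|1 + 1|}{\sqrt{a^2 + b^2}}
    = \frac{2}{\sqrt{a^2 + b^2}}.
\]
```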
@AnilAnvesh 3 years ago
Thank You for this video ❤️
@creativeuser9086 1 year ago
Awesome video. Can you do more videos about LLMs?
@SerranoAcademy 10 months ago
Thanks for the suggestion! I did some recently, here they are: kzbin.info/aero/PLs8w1Cdi-zva4fwKkl9EK13siFvL9Wewf
@damelilad875 5 years ago
Great lecture! You should make a video on how to run all these algorithms with the scikit-learn package in Python.
@ikramullahmohmand 4 years ago
very well explained. thanks mate :-)
@ravindrasonavane1469 4 years ago
Please, can anyone tell me how the 1 and -1 came into the equation of the line?
@hanfei3468 4 years ago
Thanks Luis, great video and explanation! How do you do the animation in the video?
@ardhidattatreyavarma5337 9 months ago
awesome explanation
@andresalmodovar3473 3 years ago
Hi Luis, amazing job. Just one question: could there be a typo in the criteria for misclassification of points? I think the criteria should be: for blue, ap+bq+c > -1, and for red, ap+bq+c < 1.
@eisamqassim1169 4 years ago
Is SVM only for separating the points of two classes?!
@XunZhong 5 months ago
The "Margin Error" part is confusing. Didn't get it.
@scientific-reasoning 4 years ago
Hi Luis, I like your YouTube video animations; they are great! May I know what software you use for the animations?
@konradpietras8030 2 years ago
In my opinion, the suggestion that the margin increases every iteration is misleading. If I understood correctly, the margin-error term does push it bigger, but there is also the classification error, which can easily compensate for this and make the margin decrease overall in a single iteration.
@sriti_hikari 4 years ago
Thank you for that video!
@terryliu3635 2 years ago
Great video!!!
@yousufali_28 5 years ago
as always well explained.
@sixers333333 5 years ago
I don't really understand why we would use SVMs vs. logistic regression; both are used to find the best-fitting line.
@akashkewar 5 years ago
Logistic regression doesn't have the concept of margin maximization. There exist infinitely many hyperplanes that separate the data, but how would you choose a good one? SVM comes to the rescue (it generalizes well). Also, SVM is not popular just because it finds the best hyperplane; the KERNEL TRICK in the dual form of SVM (taking data to a higher dimension without much computational cost via the kernel function, making it linearly separable; imagine a dataset of two concentric circles) is what made SVM so popular, and that is something logistic regression cannot do.
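A small sketch of that concentric-circles point (an illustration assuming scikit-learn; not code from the video):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles: no straight line can separate them.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)   # plain linear SVM
rbf = SVC(kernel="rbf").fit(X, y)         # kernel trick: implicit higher dimension

print(linear.score(X, y))   # roughly chance level: a straight line fails
print(rbf.score(X, y))      # close to 1.0
```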
@pushkarparanjpe 5 years ago
Great work!
@john22594 5 years ago
Nice tutorial. Thank you so much. It would help us if you added code for this algorithm.