Can't find a better lecture on SVM than this. World class. Better than MIT and Stanford any day.
@nilarunm · 7 years ago
I have listened to SVM e-classes from various sources, including Stanford, but this is the most explanatory and understandable. Sir teaches from realization, not from comprehension.
@varunmanjunath6204 · 3 years ago
Exactly. India is underestimated in terms of education.
@karthik-ex4dm · 5 years ago
Lectures from Stanford, MIT, etc. cannot come anywhere near this explanation! Not only in ML, but in most topics. #BestInTheWorld
@sanjaykumarbahuguna4030 · 2 years ago
I have done a lot of searching to understand SVM, and I found you are the best. Thanks, sir.
@vichitrakumar4452 · 6 years ago
I have tried understanding it from so many sources; this one made me understand it far better than any other source on the internet. Thanks!
@imalive404 · 5 years ago
kzbin.info/www/bejne/lYHamZyNra1-btE
@santhoshnarayanan343 · 6 years ago
So far the best lecture, making each and every aspect of Support Vector Machines understandable theoretically.
@grafdp · 9 years ago
Very well explained. I don't understand why this doesn't have many more clicks. A true saviour.
@siddharthabaral4271 · 6 years ago
Why can't other videos be this simple? This is just brilliant.
@PankajKumar-tr7bg · a year ago
Great resource to watch, far better than MIT and Stanford.
@adbcify · 7 years ago
This is far better than MIT or Stanford... seriously!
@imalive404 · 5 years ago
This video is good, but you might want to check this: kzbin.info/www/bejne/lYHamZyNra1-btE
@generationwolves · 5 years ago
@@imalive404 Or this: kzbin.info/www/bejne/n5yadmqoZ8Zknpo
@geethujoyanand · 6 years ago
The best lecture on SVM. Thank you, sir.
@binnysplaytime4640 · 8 months ago
Excellent class, sir.
@williamstephenjones3863 · 2 months ago
Brilliant!
@sandhyapotadar358 · 3 years ago
Awesome explanation, sir. Thank you.
@MohammedHarisSumair · 7 years ago
This lecture was very easy to comprehend and was also enjoyable.
@meens1986 · 7 years ago
Awesome explanation. Thanks a ton for making it so clear.
@vishaldhawan420 · 6 years ago
SVM starts from 5:13
@shivang7989 · 5 years ago
Thanks
@mazharulislam7619 · 3 years ago
Excellent, sir.
@aniruddhg5641 · 6 years ago
Wonderfully explained Sir. Thank you very much.
@Code-09 · 5 years ago
Best lecture. Thank you very much.
@gauravsharma-mi2er · 7 years ago
One of the best videos.
@bibhuprasadsahu5595 · 6 years ago
Good explanation of SVM.
@JourneyMindMap · 4 years ago
Thank you; it's difficult to find a clear explanation like this.
@shivendrapanicker8233 · 7 years ago
Awesome lecture.
@adityajoshi5 · 8 years ago
Intro music sounded like background score in romantic scene of a 90's movie.
@TonySahoo92 · 6 years ago
To me it sounds more like a thriller.
@abhishek007123 · 4 years ago
😂😂😂😂😂😂😂
@bishwanathlalkarn8203 · 6 years ago
Thank you, it's an awesome lecture. It really helps me in my thesis work. Do you have any lecture video on the Fisher kernel?
@RKDataAnalytics · 7 years ago
Appreciate the efforts. Thanks!
@AnilSingh-vx4zs · 6 years ago
Watch from 27:36.
@insoucyant · 7 years ago
Wonderful class. Thank You.
@adewolekayode6148 · 9 years ago
Thank you so much, sir, for this lesson. Sir, please can you present a tutorial on TSVM?
@rakhsupddr1034 · 8 years ago
Thank you, sir, the lecture was really good. I have a doubt though: we had to maximize the distance between the two support vectors (one +ve and the other -ve), which is why we minimized ||W|| (because it was inversely proportional to the distance). But at 46:46 you told us to MAXimize the Lagrangian expression (which is directly proportional to W) to find the alpha_i's. I didn't find any reason for this maximization in particular, as it has already been minimized to substitute for b and W.
@kamaluddindimas1661 · 6 years ago
It's a dual problem, bro. You minimize the Lagrangian over w and b but maximize it over the multipliers alpha; you can read up on it under constrained optimization.
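To make the minimize/maximize split concrete, here is a minimal sketch on a hypothetical two-point dataset; the points and the hand-derived dual solution are my own illustration, not taken from the lecture:

```python
# Toy SVM dual on invented data: x1=(0,0) labeled -1, x2=(2,0) labeled +1.
# The constraint sum(alpha_i * y_i) = 0 forces alpha1 == alpha2 == a, so the
# dual objective  L(a) = sum_i alpha_i - 0.5 * sum_ij alpha_i alpha_j y_i y_j (x_i . x_j)
# collapses to L(a) = 2a - 2a^2   (only x2.x2 = 4 is nonzero).
# MAXIMIZING over a:  dL/da = 2 - 4a = 0  =>  a = 0.5.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

X = [(0.0, 0.0), (2.0, 0.0)]
y = [-1, 1]

a = 0.5                  # maximizer of the dual, found above by calculus
alphas = [a, a]          # equal because of the equality constraint

# Recover the primal solution:  w = sum_i alpha_i * y_i * x_i
w = [sum(al * yi * xi[d] for al, yi, xi in zip(alphas, y, X)) for d in range(2)]

# b from any support vector:  y_s * (w.x_s + b) = 1  =>  b = y_s - w.x_s
b = y[1] - dot(w, X[1])

print(w, b)   # w = [1.0, 0.0], b = -1.0  ->  separating hyperplane x = 1
```

Minimizing ½||w||² over (w, b) and maximizing the Lagrangian over alpha ≥ 0 are the two sides of the same saddle-point (primal/dual) problem, which is why the lecture switches from "minimize" to "maximize" without contradiction.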
@eswan7638 · 6 years ago
Excellent
@SumitChauhan009 · a year ago
How did you write the value of b for the margin like that?
@sanjeevdhewa9670 · 4 years ago
For SVM, start from 5:18.
@bicepjai · 8 years ago
Good one.
@shailendra9292 · 6 years ago
At 5:20: Support Vector Machine.
@gou-goutham · 10 years ago
Thank you sir....
@harshinisewani5095 · 4 years ago
At 44:35, why doesn't the summation having alpha_i and y_i get cancelled?
@nishugarg7466 · 9 years ago
Thank you, sir.
@PSNAcademy · 6 years ago
At 43:29, the 2nd term is getting cancelled due to sum(alpha[i]*y[i]) = 0. Then why not the 3rd term?
@kamaluddindimas1661 · 6 years ago
The 2nd term can be cancelled because b is a constant: b factors out, leaving a sum of only alpha_i and y_i, which is zero. But in the 3rd term the sum consists of alpha_i, y_i, and x_i dotted with w, so it does not reduce to sum(alpha_i*y_i).
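For readers following along, the step in question can be written out explicitly (my reconstruction in standard SVM notation, not a transcript of the board):

```latex
% Expanding the primal Lagrangian:
L = \tfrac{1}{2}\,\mathbf{w}\cdot\mathbf{w}
    - \sum_i \alpha_i y_i\,(\mathbf{w}\cdot\mathbf{x}_i)
    - b\sum_i \alpha_i y_i
    + \sum_i \alpha_i .
% Setting \partial L/\partial b = 0 gives \sum_i \alpha_i y_i = 0, so the
% term containing b vanishes: b is a constant and factors out of its sum.
% The data term does NOT vanish, because x_i varies with i:
\sum_i \alpha_i y_i\,(\mathbf{w}\cdot\mathbf{x}_i)
  = \mathbf{w}\cdot\Big(\sum_i \alpha_i y_i \mathbf{x}_i\Big)
  = \mathbf{w}\cdot\mathbf{w} \neq 0 \;\text{in general}.
```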
@nikhilashodariya · 7 years ago
Sir, can you please tell why at 44:59 two different subscripts were used for W·W? Why can't we use alpha_i*alpha_i * y_i*y_i * x_i*x_i instead of alpha_i*alpha_j * y_i*y_j * x_i*x_j?
@prof.manjeetsinghjcboseust9034 · 3 years ago
It is for correctness as well as clarity: W·W is a product of two separate sums, so independent indices i and j are needed to capture the cross terms.
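Worked out explicitly (my notation, following the standard derivation):

```latex
% w is itself a sum, so w.w is a product of two sums and needs two
% independent summation indices:
\mathbf{w}\cdot\mathbf{w}
  = \Big(\sum_i \alpha_i y_i \mathbf{x}_i\Big)\cdot
    \Big(\sum_j \alpha_j y_j \mathbf{x}_j\Big)
  = \sum_i \sum_j \alpha_i \alpha_j\, y_i y_j\,(\mathbf{x}_i\cdot\mathbf{x}_j).
% Reusing i for both sums would keep only the diagonal (i = j) terms and
% silently drop every cross term with i != j.
```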
@dilipparmar7715 · 9 years ago
Thank you! Appreciated!
@umarfarook7773 · 6 years ago
Thank you, sir. How can we find the value of alpha_i?
@kamaluddindimas1661 · 6 years ago
When you expand the Lagrangian dual to be maximized, you get a new equation; take the derivative with respect to each alpha, set it to zero, and you can compute the alphas.
@aishwaryanarkar2954 · 5 years ago
Thank you so much :)
@suqbah1819 · 9 years ago
How did he get the equation for b?
@rakhsupddr1034 · 8 years ago
You can find that out once you know the support vector(s). According to this video, we find all the alpha_i, and we know all the X_i as well as the corresponding Y_i, and hence we find W, which is just the summation of alpha_i * Y_i * X_i. After this, for any support vector we know W·X + b = +1/-1, so we know W and X and can find b.
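In formula form (standard derivation, not verbatim from the video): for any support vector x_s with label y_s,

```latex
y_s\,(\mathbf{w}\cdot\mathbf{x}_s + b) = 1
\quad\Longrightarrow\quad
b = \frac{1}{y_s} - \mathbf{w}\cdot\mathbf{x}_s
  = y_s - \mathbf{w}\cdot\mathbf{x}_s
\qquad (\text{since } y_s \in \{-1,+1\},\; 1/y_s = y_s).
```

In practice b is usually averaged over all support vectors for numerical stability.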
@arnimachaurasia7 · 9 years ago
Thank you, sir.... :-)
@venukarthikboddu7031 · 6 years ago
Can anyone explain how this SVM can be implemented practically, e.g. by taking a dataset containing reviews and building an SVM model for classifying reviews into positive and negative classes?
@sanketdamane2642 · 6 years ago
Sentdex has good videos on YouTube for SVM using Python.
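Besides Sentdex's videos, here is a minimal scikit-learn sketch of exactly that task. The six tiny reviews and their labels are invented purely for illustration, and TfidfVectorizer plus LinearSVC is one reasonable pipeline among several:

```python
# Minimal sentiment classifier with a linear SVM (scikit-learn).
# The tiny review dataset below is made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "great movie, loved every minute",
    "wonderful acting and a brilliant plot",
    "absolutely fantastic, would watch again",
    "terrible movie, waste of time",
    "awful plot and horrible acting",
    "boring, I fell asleep halfway through",
]
labels = ["positive", "positive", "positive",
          "negative", "negative", "negative"]

# TF-IDF turns each review into a sparse word-weight vector; LinearSVC then
# fits the maximum-margin hyperplane discussed in the lecture.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(reviews, labels)

print(model.predict(["what a great and brilliant film"]))
```

On a real dataset you would split into train/test sets and tune the regularization parameter C, but the pipeline shape stays the same.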
@K_SE__VishalRoyRoy · 2 years ago
Can anyone guide me on how to make a roadmap to learn machine learning?
@hassuunna · 5 years ago
1.5X speed
@sidhantthole4185 · 6 years ago
46:45 it should be minimization
@shubhamvashisth9518 · 4 years ago
No, it will be maximization only.
@harshinisewani5095 · 4 years ago
@@shubhamvashisth9518 Why will it be maximized? Please can you explain?
@shubhamvashisth9518 · 4 years ago
Because in the dual formulation the Lagrangian is maximized with respect to the multipliers alpha_i (while being minimized with respect to w and b).
@vishalsahu4233 · 3 years ago
You bore students a lot! Tomorrow is my exam and I am wasting time on this.