--Intuition behind LDA
a) It reduces dimensions by constructing new features that are linear combinations of the original features.
b) It uses eigendecomposition.
c) It compresses the features, thereby providing better visualization.
d) It provides better efficiency.
--Differences from PCA
a) PCA is unsupervised and does not consider class labels, whereas LDA is supervised and considers class labels (hence it sometimes provides better separability between classes).
b) PCA finds the axis along which the variability of the data is maximum, whereas LDA finds the axis along which the class separation is maximum.
--Intuition behind scatter
a) The variability within a class is minimized (within-class scatter).
b) The variability between classes is maximized (between-class scatter).
--Characteristics of LDA
a) It is a supervised technique and tries to find the axis that separates the classes the most.
b) The Fisher discriminant ratio is the metric: the ratio of between-class scatter to within-class scatter, and the intention is to maximize this ratio.
c) For c classes the number of non-zero eigenvalues is (c-1), so only (c-1) discriminant directions are needed to separate c classes in this setting.
d) It assumes all classes are generated from Gaussian distributions (sometimes this may not be true).
e) It assumes that the classes have the same covariance matrix.
Sir, thank you again for this wonderful initiative of yours.
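For anyone who wants to try these points hands-on, here is a minimal sketch (my own illustration, not code from the video) using scikit-learn's LinearDiscriminantAnalysis on the Iris data, where 3 classes give at most c - 1 = 2 discriminant axes:

```python
# Minimal LDA sketch on Iris: 3 classes, so at most c - 1 = 2 discriminant axes.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)                  # 4 original features, 3 classes

lda = LinearDiscriminantAnalysis(n_components=2)   # keep the c - 1 = 2 useful axes
X_lda = lda.fit_transform(X, y)                    # new features are linear combinations of the originals

print(X_lda.shape)                    # (150, 2): compressed, easier to visualize
print(lda.explained_variance_ratio_)  # how much discriminative spread each axis carries
```

Swapping in PCA(n_components=2) on the same data is an easy way to see the difference the class labels make.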
@abhay9994 1 year ago
Wow, this video on Linear Discriminant Analysis (LDA) by Instructor Saptarsi Goswami is incredibly informative and well-explained. I truly appreciate how he breaks down the concepts and compares LDA to PCA, highlighting the advantages of LDA. The explanations of the Fisher discriminant ratio, inter-class scatter, within-class scatter, and eigenvalue decomposition have given me a solid understanding of LDA. Thank you, Instructor Saptarsi, for sharing your expertise and helping me improve my knowledge in this area!
@dhoomketu731 4 years ago
Exceptionally well explained. Short, crisp and to the point. From a strictly practical standpoint, the great thing about this lecture is that it avoids delving into the exhaustive mathematics of solving the Fisher optimization problem and instead focuses on the critical takeaways from LDA. Even the lengthy and mathematically rigorous NPTEL lectures do not place as much emphasis on the takeaways from LDA, or if they do, halfway through the lecture the viewer is so exhausted that he/she is not in a position to fully grasp them. Another great thing about this lecture is that it is class agnostic: other lectures on LDA explain the topic from a binary classification perspective and only towards the end tell you how the same formulation, with minor modifications, can be extended to problems with more than two classes, which in my opinion convolutes things a bit. I have subscribed to your channel and would recommend it to other friends as well. Regards from Dehradun, Uttarakhand.
@SaptarsiGoswami 4 years ago
Thank you so much for your elaborate and generous comment. This will motivate us further.
@austinjohn8713 4 years ago
Excellent lecture. You know your stuff, because you broke it down for us to see what is happening. After scouring the net for days for the motivation behind the steps in LDA, I breathe a sigh of relief seeing your video. Thank you for saving me from frustration over something that is actually very simple to understand. Please make more videos. I subscribed and liked!
@SaptarsiGoswami 4 years ago
Thank you so much
@NehaGupta-lk4ou 1 year ago
I agree, this video explains the motivation and lays out the steps very simply, cutting out the jargon.
@aashishraina2831 4 years ago
Excellent. You have the art of presenting detailed information so simply.
@SaptarsiGoswami 4 years ago
Thanks a lot. All the best for your ML and DL journey.
@nk-dy4hc 9 months ago
Very good explanation. You deserve more subscribers, sir. YouTube Shorts might bring some users. Unfortunately, that is how the algorithm works. All the best.
@sofluzik 4 years ago
Never seen such a lucid explanation... thank you.
@SaptarsiGoswami 4 years ago
Thanks a lot. Please do check our other videos and give your feedback.
@aashishadhikari8144 3 years ago
Kudos for specifying that m1, m2 and m3 are vectors while using them for S_B. Many sources do not make this explicit.
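A tiny illustration of that point with made-up mean vectors (values chosen arbitrarily): because the m_i are vectors, every term of S_B is an outer product and hence a d x d matrix, not a scalar.

```python
import numpy as np

# Made-up 2-D class mean vectors; the N_i weights are dropped here (equal class
# sizes assumed) to keep the focus on the vector/outer-product point.
m1, m2, m3 = np.array([2., 3.]), np.array([7., 6.]), np.array([3., 9.])
m = (m1 + m2 + m3) / 3                                        # overall mean vector

S_B = sum(np.outer(mi - m, mi - m) for mi in (m1, m2, m3))    # each term is (m_i - m)(m_i - m)^T
print(S_B.shape)                                              # (2, 2): a matrix, because the means are vectors
```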
@SaptarsiGoswami 3 years ago
Thanks a lot Aashish
@jaikishank 4 years ago
Very good presentation and a nice conceptual explanation. Thanks, Mr Saptarsi.
@SaptarsiGoswami 4 years ago
Thanks a lot. Do check out our other videos too, like PCA and t-SNE, if they are of interest to you.
@mouleeswarang.s1023 3 years ago
This video was highly helpful. Thank you!
@SaptarsiGoswami 3 years ago
Thanks a lot
@thejaswinim.s1691 2 years ago
great job...
@SaptarsiGoswami 2 years ago
Thank you
@kausalyaakannan7064 4 years ago
Excellent one sir. Thank you
@SaptarsiGoswami 4 years ago
Thanks a lot, do check out our other videos too
@nintishia 2 years ago
Excellent exposition, thanks.
@bruteforce8744 2 years ago
Excellent video... just a small correction: the mean of the y components of X1 is 3.6 and not 3.8.
@piyukr 2 years ago
It was a very helpful lecture. Thank you, Sir.
@rajdeep1229 3 years ago
Very good lecture.
@SaptarsiGoswami 3 years ago
Thanks Rajdeep.
@sudhirm4094 3 years ago
A difficult concept explained so simply. Thank you, Sir.
@SaptarsiGoswami 3 years ago
Thanks for putting your comments here. Please do share with interested folks and check out our other videos too.
@mmm777ization 3 years ago
At 11:08, while finding S_B, you do not use the formula you mentioned at the beginning when you started explaining it; the total mean is not used anywhere.
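For anyone puzzling over the same point, one possible reconciliation (assuming the video used the usual two-class shortcut): for two classes, the total-mean form of S_B and the shortcut (m1 - m2)(m1 - m2)^T differ only by the scalar N1*N2/N, so they give the same eigen directions. A quick check with made-up numbers, not the video's data:

```python
import numpy as np

# Made-up two-class data, 4 points per class, so N1*N2/N = 4*4/8 = 2.
X1 = np.array([[4., 2.], [2., 4.], [2., 3.], [3., 6.]])
X2 = np.array([[9., 10.], [6., 8.], [9., 5.], [8., 7.]])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)        # class mean vectors
m = np.vstack([X1, X2]).mean(axis=0)             # total (overall) mean vector

SB_shortcut = np.outer(m1 - m2, m1 - m2)                                # (m1 - m2)(m1 - m2)^T
SB_total = 4 * np.outer(m1 - m, m1 - m) + 4 * np.outer(m2 - m, m2 - m)  # sum of N_i (m_i - m)(m_i - m)^T

print(np.allclose(SB_total, 2 * SB_shortcut))    # True: identical up to the scalar N1*N2/N
```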
@dendyarmanda4417 3 years ago
I'd like to ask: when computing S1 and S2, why is the result divided by four, sir? And when computing S_B, the earlier formula mentioned N_i, but it was not used. I need some explanation, sir.
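Sketching the divide-by-four part for anyone reading along: if a class has five points, dividing the raw scatter by N_i - 1 = 4 simply turns it into the sample covariance, and with equal class sizes that is only a common scaling, so the direction obtained from S_W^{-1} S_B does not change. A small sketch with made-up five-point data (not necessarily the video's numbers):

```python
import numpy as np

# Made-up class of five 2-D points, so N_i - 1 = 4 (mirroring the divide-by-four question).
X1 = np.array([[4., 1.], [2., 4.], [2., 3.], [3., 6.], [4., 4.]])
m1 = X1.mean(axis=0)                     # class mean vector

D = X1 - m1
S1_scatter = D.T @ D                     # raw within-class scatter: sum of (x - m1)(x - m1)^T
S1_cov = S1_scatter / (len(X1) - 1)      # sample covariance: the same matrix divided by 4

print(np.allclose(S1_scatter, 4 * S1_cov))   # True: they differ only by the factor 4
```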
@sofluzik 4 years ago
Really very nice and lucid, sir...
@SaptarsiGoswami 4 years ago
Thank you so much
@SoumyaDasgupta 4 years ago
Good explanation, Saptarsi.
@SaptarsiGoswami 4 years ago
Thank you
@SoumyaDasgupta 4 years ago
@@SaptarsiGoswami Let's connect for a good ML project if you are free.
@dr.shaminisrivastava4213 4 years ago
What is the difference between orthogonal and canonical discriminant techniques?
@azizullah6360 3 years ago
What software did you use for the graphical representation?
@iitncompany 1 year ago
Wrong at 10:40: the S1 and S2 matrix calculations are both incorrect.
@abdulrahimshihabuddin1119 3 years ago
Is MDA an extension to LDA? Or a different method?
@SaptarsiGoswami 3 years ago
Hey, sorry for the late reply, I missed your question. You can think of MDA as a more generic version of LDA. In LDA a class comes from a single Gaussian distribution; in MDA it can come from multiple Gaussian distributions, each of which may represent a subclass. You may also follow the link www.sthda.com/english/articles/36-classification-methods-essentials/146-discriminant-analysis-essentials-in-r/
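In case it is useful, here is a rough sketch of the idea in that reply: each class is modelled by a small Gaussian mixture (its subclasses) and a point is assigned to the class with the highest prior-weighted likelihood. This is only an illustration built on scikit-learn's GaussianMixture with arbitrary settings, not the exact MDA fitting procedure from the linked article:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

models, log_priors = [], []
for c in classes:
    Xc = X[y == c]
    # Two Gaussian "subclasses" per class; 2 is an arbitrary illustrative choice.
    models.append(GaussianMixture(n_components=2, random_state=0).fit(Xc))
    log_priors.append(np.log(len(Xc) / len(X)))

# Score every point under each class model: log prior + log class-conditional density.
scores = np.column_stack([lp + gm.score_samples(X) for lp, gm in zip(log_priors, models)])
pred = classes[np.argmax(scores, axis=1)]
print("training accuracy:", (pred == y).mean())
```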
@abdulrahimshihabuddin1119 3 years ago
Saptarsi Goswami Sorry, sir. I meant multiple discriminant analysis. Is that a different method? Are mixture discriminant analysis and multiple discriminant analysis the same?
@SaptarsiGoswami 3 years ago
Not much is available about this. Maybe I will check Duda's book and comment. I am tempted to say this is LDA with multiple classes, but I will hold my comments until I am sure.