The Meaning of Covariance Matrix and PCA

30,943 views

Exploring the Meaning Of Math

Comments: 30
@divyab592 · a year ago
Very good visual representation of PCA. Thank you
@shahainmanujith2109 · 2 months ago
Fantastic!
@danialyntykbay8057 · 2 years ago
Great explanation! Thank you
@johnnybatafljeska6368 · 5 years ago
19:10 What is the point of getting the biggest variance? Why not the smallest?
@0xLoneWolf · 4 years ago
I'm not sure why we stretch the data points, but here's an example of why you want an axis along which the points have as large a variance as possible. Say you wanted to find the relationship between calorie consumption and IQ. If everyone you sample has an IQ of 100, no matter their consumption, there is nothing for calorie consumption to explain. But if there is a big difference in people's IQ depending on calorie consumption, then you can use that to find a relationship between the two. Conversely, if you only look at groups of people who consume x calories vs. x+1 calories (x has little variance), there is not enough variation in x to explain wild fluctuations in IQ. You want both to have high variance, i.e. high covariance. When you fit y = cx + b, you want the x variable to have as much variance as possible so you can use it to explain as much variance in y as possible. In PCA you will find as many axes as there are variables, but in practice you remove the ones that have the least explaining ability (variance) so that you can reduce the number of variables that yield y, as sketched below. It makes calculations easier and helps machine learning algorithms learn, among other reasons (google "dimensionality reduction").
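A minimal numpy sketch of that dimensionality-reduction step, using toy data and variable names of my own (nothing here is from the video itself): compute the covariance matrix of the centered data, take its eigenvectors, and keep only the highest-variance directions.

```python
# PCA-by-covariance sketch (assumes numpy; toy data, not the video's example).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # 200 samples, 3 variables
X[:, 2] = 2 * X[:, 0] + 0.1 * X[:, 2]   # make one variable nearly redundant

Xc = X - X.mean(axis=0)                 # center the data (covariance needs this)
C = Xc.T @ Xc / (len(Xc) - 1)           # sample covariance matrix, same as np.cov(X.T)

eigvals, eigvecs = np.linalg.eigh(C)    # eigh, since C is symmetric
order = np.argsort(eigvals)[::-1]       # largest variance first
W = eigvecs[:, order[:2]]               # keep the two highest-variance axes

X_reduced = Xc @ W                      # project the 3D data down to 2D
print(eigvals[order])                   # variance explained along each principal axis
```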
@avinashkumar6884 · 5 years ago
Thanks for the simple explanation, sir... it's really helpful...
@shahulrahman2516 · 8 months ago
Great video, thank you
@VictoriaOtunsha · 2 years ago
Thank you for this breakdown
@groundrail · 6 years ago
Sweet summary of how covariance relates to eigenvectors. I'm still looking for a summary of the meaning of life.
@JL-XrtaMayoNoCheese · 3 months ago
Eastern Orthodoxy is the meaning of life
@adamkolany1668 · a year ago
At ~7:45: in the covariance you have to subtract the mean values.
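A quick numeric check of this point, with toy numbers of my own (assumes numpy): forgetting to subtract the means gives a different, wrong number unless both means happen to be zero.

```python
# Covariance requires mean subtraction (toy example, assumes numpy).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 5.0, 9.0])

cov_centered = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance: 2.75
cov_uncentered = np.mean(x * y)                          # means not subtracted: 15.25

print(cov_centered, np.cov(x, y, bias=True)[0, 1])       # these two agree
print(cov_uncentered)                                    # this one does not
```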
@Mr68810 · 5 years ago
Thanks for the simple explanation. It clearly helps to build intuitive understanding.
@keshavkumar7769 · 3 years ago
Hey, your content is good. Please make more videos.
@joshuaronisjr · 5 years ago
Hi, and thanks for your video - it's great! I just have one question - I understand that the covariance matrix transforms datapoints in the direction of its eigenvectors. Additionally, I know the eigenvectors will be orthogonal, since the covariance matrix is symmetric. What I don't understand is how we know the eigenvectors of the covariance matrix are the directions of maximum variance...
@kartiks9489 · 5 years ago
Eigenvectors of a matrix are those vectors which aren't knocked off their span. In this case we know that these are the directions along which the covariance matrix stretches the data.
@joshuaronisjr · 5 years ago
@kartiks9489 I understand that. But it all seems to come from the calculation; I couldn't have seen it intuitively... so I'm wondering if there is an intuitive reason why the eigenvectors of the covariance matrix turn out to be the directions to project onto that have the maximum variance...
@rahuldeora5815 · 5 years ago
I found many complicated answers to this. Did you find a simple answer from an intuitive standpoint?
@azadalmasov5849 · 4 years ago
Thanks for the question; it challenged me. To understand this, translate the points into the coordinate system whose basis is the orthonormal eigenvectors. Your points transformed by matrix A in the standard coordinate system are Ax. To see the coordinates of these transformed points in another coordinate system, you multiply them by the inverse of the matrix whose columns are the basis vectors of the new system (here that is V, whose columns are the eigenvectors). First write A via its eigendecomposition, A = VΛV⁻¹. Then the change of basis gives V⁻¹Ax = V⁻¹VΛV⁻¹x = Λ(V⁻¹x): in the eigenbasis, the transformation is pure scaling by the eigenvalues. Any other basis vector makes an angle with the eigenbasis, so the stretch it picks up is reduced by cos(α), and no other coordinate system gives a higher value.
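A numeric sketch of that change of basis, with toy Gaussian data of my choosing (assumes numpy): in the eigenbasis V of the covariance matrix, the covariance becomes the diagonal matrix Λ, so each eigenvalue is exactly the variance along its eigenvector.

```python
# Diagonalizing the covariance matrix in its eigenbasis (assumes numpy; toy data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[3.0, 1.2], [1.2, 1.0]], size=5000)

C = np.cov(X.T)                 # sample covariance matrix
eigvals, V = np.linalg.eigh(C)  # C = V @ np.diag(eigvals) @ V.T, with V orthogonal

Y = X @ V                       # coordinates of the data in the eigenbasis
print(np.cov(Y.T).round(3))     # ~diagonal: off-diagonal covariances vanish
print(eigvals.round(3))         # its diagonal entries: variances along the eigenvectors
```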
@HamidKarimiDS · 3 years ago
It comes from finding an axis along which the variance of the projected data is maximized. Solving this optimization naturally leads to the eigenvector of the covariance matrix being that axis.
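Written out, that optimization takes only a few lines. This is the standard derivation in my own notation (Σ is the covariance matrix of the centered data, w a candidate unit direction; the video may use different symbols):

```latex
\max_{w}\; w^{\top}\Sigma w
\quad \text{subject to} \quad w^{\top}w = 1,
\qquad
\mathcal{L}(w,\lambda) = w^{\top}\Sigma w - \lambda\,(w^{\top}w - 1),

\frac{\partial \mathcal{L}}{\partial w} = 2\Sigma w - 2\lambda w = 0
\;\Longrightarrow\;
\Sigma w = \lambda w .
```

At such a w the attained variance is wᵀΣw = λwᵀw = λ, so the unit eigenvector with the largest eigenvalue is precisely the direction of maximum variance.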
@rakkaalhazimi3672 · 3 years ago
Nice video man
@youbeyou9769 · a year ago
Why did we choose the eigenvectors and not any other directions?
@khaledkedjar3616 · 6 years ago
Do you have a tutorial about neural networks?
@acalza · 6 years ago
No comments, but fuck, thank you. This was really good, despite some hiccups throughout, but man, thank you. My 'Top 20' uni can't even teach this properly.
@sreecharan4300 · 6 years ago
Thanks a lot for clearing up my concepts.
@the_dark_kerm · a year ago
Noice!!
@hfz.arslan · 3 years ago
Hi, can I get the slides please?
@3oclockwork454 · 6 years ago
THE BEST
@istanbulower · 5 years ago
4:03 Instead of variance, the standard deviation may be used to express the spread of data from the mean. Thanks for the video.
@STWNoman · 2 years ago
Very difficult to understand
@kongki7563 · 5 years ago
A Korean whose family name is Yeo