19:10 What is the point of getting the biggest variance? Why not the smallest?
@0xLoneWolf 4 years ago
I'm not sure why we stretch the data points, but here's an example of why you want an axis that contains as much variance as possible. Say you wanted to find the relationship between calorie consumption and IQ. If everyone you sample has an IQ of 100, no matter the consumption, there is nothing for calorie consumption to explain. But if there is a big difference in people's IQ depending on calorie consumption, then you can use that to find a relationship between the two. Conversely, if you only look at groups of people who consume x calories vs. people who consume x+1 calories (x has little variance), there is not much variation in x to explain wild fluctuations in IQ. You want both to have high variance, i.e., high covariance. When you fit y = cx + b, you want the x variable to have as much variance as possible so you can use it to explain as much variance in y as possible. In PCA you will find as many axes as there are variables, but in practice you remove the ones that have the least explaining ability (variance) so that you can reduce the number of variables that yield y. It makes calculations easier and helps machine learning algorithms learn, among other reasons (google dimensionality reduction).
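(Not from the video; just a minimal NumPy sketch of the dimensionality-reduction idea in the comment above. The data, variable names, and the k = 2 cutoff are all illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 samples, 5 features (made up)

Xc = X - X.mean(axis=0)                # center each feature
cov = (Xc.T @ Xc) / (len(Xc) - 1)      # sample covariance matrix

# eigh returns eigenvalues in ascending order for symmetric matrices
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]      # sort axes by variance, descending

k = 2                                  # keep the k highest-variance axes
W = eigvecs[:, order[:k]]
X_reduced = Xc @ W                     # project onto the top-k axes
print(X_reduced.shape)                 # (200, 2)
```

Dropping the low-variance axes is exactly the "remove the ones that have the least explaining ability" step.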
@avinashkumar6884 5 years ago
Thanks for the simple explanation, sir... it's really helpful...
@shahulrahman2516 8 months ago
Great video, thank you!
@VictoriaOtunsha 2 years ago
Thank you for this breakdown
@groundrail 6 years ago
Sweet summary of how covariance relates to eigenvectors. I'm still looking for a summary of the meaning of life.
@JL-XrtaMayoNoCheese 3 months ago
Eastern Orthodoxy is the meaning of life
@adamkolany1668 a year ago
@~7:45 In covariance you have to subtract the mean values.
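(A quick NumPy check of that point, with made-up numbers: the sample covariance is computed on mean-centered values, and np.cov does the centering for you.)

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 5.0, 9.0])

# subtract the means before multiplying, then normalize by n - 1
manual = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

print(manual)              # ~3.667
print(np.cov(x, y)[0, 1])  # off-diagonal entry is cov(x, y); same value
```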
@Mr68810 5 years ago
Thanks for the simple explanation. It really helps to build an intuitive understanding.
@keshavkumar7769 3 years ago
Hey, your content is good. Please make more videos.
@joshuaronisjr 5 years ago
Hi, and thanks for your video - it's great! I just have one question - I understand that the covariance matrix transforms datapoints in the direction of its eigenvectors. Additionally, I know the eigenvectors will be orthogonal, since the covariance matrix is symmetric. What I don't understand is how we know the eigenvectors of the covariance matrix are the directions of maximum variance...
@kartiks9489 5 years ago
Eigenvectors of a matrix are those vectors which aren't knocked off their span. In this case, we know these are the directions along which the covariance matrix stretches the data.
@joshuaronisjr 5 years ago
@@kartiks9489 I understand that. But it all seems to come from the calculation. I couldn't have seen it intuitively... so I'm wondering if there is an intuitive reason why the eigenvectors of the covariance matrix turn out to be the directions to project onto that have the maximum variance...
@rahuldeora5815 5 years ago
I found many complicated answers to this. Did you find a simple answer from an intuitive standpoint?
@azadalmasov5849 4 years ago
Thanks for the question, it challenged me. To understand this, translate the points into the coordinate system whose basis is the orthonormal eigenvectors. Points transformed by the matrix A in your standard coordinate system are Ax. If you want the coordinates of these transformed points in another coordinate system, you multiply them by the inverse of the matrix whose columns are the basis of the new system (here that matrix is V, whose columns are the eigenbasis). Before doing this, write A with its eigendecomposition, A = VΛinv(V). Then the change of basis gives inv(V)Ax = inv(V)VΛinv(V)x = Λinv(V)x, so in eigen-coordinates the transformation is just a diagonal scaling. From this you can see that no other coordinate system gives a higher value, because any other basis vectors make an angle with the eigenbasis, and the projection is reduced by cos(α).
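(A small numerical sanity check of this thread, using arbitrary synthetic data: among many random unit directions, none beats the top eigenvector of the covariance matrix for projected variance.)

```python
import numpy as np

rng = np.random.default_rng(1)
# correlated 2-D data so the cloud has a clear dominant direction
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [1.0, 1.0]])
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)            # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
top = eigvecs[:, -1]                      # eigenvector of largest eigenvalue

def projected_variance(w):
    w = w / np.linalg.norm(w)             # make it a unit direction
    return np.var(Xc @ w, ddof=1)         # variance of the 1-D projection

random_dirs = rng.normal(size=(1000, 2))
best_random = max(projected_variance(w) for w in random_dirs)
print(projected_variance(top) >= best_random)   # True
```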
@HamidKarimiDS 3 years ago
It comes from finding the axis along which the variance of the projected data is maximized. Solving that optimization naturally leads to an eigenvector of the covariance matrix being that axis (sketched below).
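(For completeness, here is a sketch of that standard derivation, with Σ the covariance matrix of the centered data X: maximize the projected variance over unit vectors w.)

```latex
\max_{\|w\|=1} \operatorname{Var}(Xw) \;=\; \max_{\|w\|=1} w^{\top}\Sigma w,
\qquad
\mathcal{L}(w,\lambda) \;=\; w^{\top}\Sigma w \;-\; \lambda\,(w^{\top}w - 1),
\qquad
\nabla_w \mathcal{L} \;=\; 2\Sigma w - 2\lambda w \;=\; 0
\;\Longrightarrow\;
\Sigma w = \lambda w .
```

So the stationary directions are exactly the eigenvectors of Σ, and at an eigenvector the projected variance is w⊤Σw = λ, which is largest for the eigenvector of the largest eigenvalue.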
@rakkaalhazimi3672 3 years ago
Nice video man
@youbeyou9769 a year ago
Why did we choose the eigenvectors and not any other directions?
@khaledkedjar3616 6 years ago
Do you have any tutorials about neural networks?
@acalza 6 years ago
No comments, but fuck, thank you. This was really good, despite some hiccups throughout, but man, thank you. My 'Top 20' uni can't even teach this properly.
@sreecharan4300 6 years ago
Thanks a lot for clearing up these concepts.
@the_dark_kerm a year ago
Noice!!
@hfz.arslan 3 years ago
Hi, can I get the slides please?
@3oclockwork454 6 years ago
THE BEST
@istanbulower 5 years ago
4:03 Instead of variance, the standard deviation may be used to express the spread of the data from the mean. Thanks for the video.