This kind of has that old-video vibe, where it's an ancient recording of someone just explaining the thing, and you can tell, just from listening, that anyone else listening will have no problem understanding it either, because the explanation is just that good.
@dom6002 8 months ago
It's remarkable how inept professors are at explaining the simplest of concepts. You have surpassed most of mine, thank you very much.
@yee6365 8 months ago
Well this is an applied statistics course, so it's way more useful than most theoretical ones
@tyronelagore1479 2 years ago
BEAUTIFULLY Explained. It would have been great to see the transformed plot to understand the effect it has, though you did explain it quite well verbally.
@Nobody-md5kt 1 year ago
This is fantastic. I'm a software engineer currently learning about why our cosine similarity functions aren't doing so hot on our large embeddings vector for a large language model. This helps me understand what's happening behind the scenes much better. Thank you!
@cupckae1 4 months ago
Can you share your observations regarding the research?
@lbognini 3 months ago
This is what really makes the world more unfair: when you take advantage of something someone else shared to untangle a problem, and then don't even want to share with others how you did it.
@anthonykoedyk715 2 years ago
Thank you for explaining the link between eigenvectors and Mahalanobis distance. I had been learning both with no linkage between them!
@LuisRIzquierdo 3 years ago
Great video, thank you so much!! Just a minor comment that you probably know, but I think it was not clear in the video at around 8:27: eigenvalues do not have to be integers; they can be any scalar (in general, they are complex numbers), and the set of eigenvalues is a property of the linear transformation (i.e. of the matrix). You can scale any eigenvector, and it will still have the same eigenvalue associated with it. In any case, thank you so much for your excellent video!
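A minimal base-R sketch of that point, using a small symmetric matrix chosen purely for illustration (not a matrix from the video): eigen() returns unit-length eigenvectors, but any rescaling of one still pairs with the same eigenvalue.

A <- matrix(c(4, 1,
              1, 3), nrow = 2, byrow = TRUE)   # illustrative symmetric matrix
e <- eigen(A)                                  # eigenvalues + unit-length eigenvectors
v <- e$vectors[, 1]                            # first eigenvector
lambda <- e$values[1]                          # its eigenvalue (a real scalar here)
A %*% v          # equals lambda * v
A %*% (5 * v)    # equals lambda * (5 * v): rescaled eigenvector, same eigenvalue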
@qqq_Peace 5 years ago
Excellent explanation of scaling covariance within the data. And linking it to PCA is a nice way to understand the ideas behind it!
@monta7834 7 years ago
Great introduction to the problem and explanation of the basics. Wish I could have found this earlier, before wasting so much time on videos/articles by people who could only explain complicated stuff in even more complicated ways.
@chelseyli7478 2 years ago
Thank you! You made eigenvectors, eigenvalues, and Mahalanobis distance clear to me. Best video on these topics.
@jonaspoffyn 7 years ago
Small remark: at the slide where you do the matrix by vector multiplication (@6:42) the colours are definitely wrong. The results are correct but the colours for both rows should be: black*red+grey*blue
@cries3168 2 years ago
Great video, love your style of explanation, really easy to follow along! Much better than my stats lecturer!
@tinAbraham_Indy 9 months ago
I truly enjoyed watching this tutorial. Thank you.
@1982Dibya 8 years ago
Great video. But could you please explain in detail how the inverse covariance matrix and the eigenvectors relate to the Mahalanobis distance? That would be very helpful.
@PD-vt9fe 4 years ago
I have the same question. After doing some research, it turns out that eigenvectors can help with the multiplication step. More specifically, a symmetric S can be written as S = P * D * P_T, where P consists of the eigenvectors and is an orthogonal matrix, D is a diagonal matrix of the eigenvalues, and P_T is the transpose of P. This can help speed up the calculation.
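A rough R sketch of that decomposition, on simulated data made up just for illustration: for a symmetric covariance matrix S, S = P D P_T, so the inverse needed by the Mahalanobis distance is simply P D^-1 P_T.

set.seed(1)
x <- matrix(rnorm(200), ncol = 2)            # made-up data: 100 observations, 2 variables
S <- cov(x)
e <- eigen(S)
P <- e$vectors                               # columns are eigenvectors (orthogonal matrix)
S_inv <- P %*% diag(1 / e$values) %*% t(P)   # inverse via the eigendecomposition
all.equal(S_inv, solve(S))                   # TRUE (up to rounding)
d1 <- mahalanobis(x, colMeans(x), S)
d2 <- mahalanobis(x, colMeans(x), S_inv, inverted = TRUE)
all.equal(d1, d2)                            # TRUE: same squared distances either way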
@seyedmahdihosseini6748 4 years ago
Perfect explanation. A thorough understanding of the underlying mathematical concepts.
@mojtabakhayatazad2944 1 year ago
A very good video for anyone who wants to feel math like physics
@vishaljain4915 1 year ago
Could not have gotten confused even if I tried to, really clear explanation.
@souravde6116 4 years ago
Lots of doubts.
Q1) If x is a vector defined by x = [x1; x2; x3; ...; xn], what will be the size of the covariance matrix C?
Q2) If x is a matrix of M-by-N dimension, where M is the number of state vectors and N is the total number of observations of each vector at different time instants, then how do we calculate the Mahalanobis norm, what is the final size of D, and what inference can we get from this metric?
Q3) If x is a matrix of N-by-N dimension, how do we calculate the Mahalanobis norm then, what is the final size of D, and what inference can we get from this metric?
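Not an answer from the author, but for the common case base R behaves as follows (toy data, made up for illustration): with n observations of p variables, cov() gives a p-by-p matrix, and mahalanobis() returns one squared distance per observation.

set.seed(42)
x <- matrix(rnorm(50 * 3), nrow = 50, ncol = 3)   # 50 observations, 3 variables
C <- cov(x)
dim(C)                                  # Q1: covariance matrix is 3 x 3 (p x p)
d2 <- mahalanobis(x, colMeans(x), C)    # Q2: one squared distance per row of x
length(d2)                              # 50 values; large ones flag unusual observations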
@pavster3 4 years ago
Excellent video - very clear. Thanks very much for posting.
@aashishadhikari8144 3 years ago
Came to learn the Mahalanobis distance; understood why the Mahalanobis distance is defined that way, and what PCA does. :D Thanks.
@alvarezg.adrian 8 years ago
Great! Understanding concepts is better than copying formulas. Thank you for your conceptual explanation.
@vangelis9911 3 years ago
Good job in explaining a rather complicated concept, thank you
@sheenanasim 7 years ago
Wonderful explanation!! Even a complete beginner can pick this up. Thanks!
@sanjaykrish8719 3 years ago
Simply superb. You made my day.
@liuzeyuan 2 years ago
Very well explained, thank you so much Matt.
@pockeystar 7 years ago
How is the inverse of the covariance matrix linked to the shrinkage along the eigenvectors?
@anindadatta164 3 years ago
A clear statement of the conclusion in the video would have been appreciated by beginners, e.g. MD is the squared Z-score of a multivariate sample, calculated after removing the collinearity among the variables.
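A quick numeric check of that statement in R, on simulated data (not data from the video): the squared Mahalanobis distance equals the sum of squared z-scores of the decorrelated (principal-component) variables.

set.seed(7)
x <- matrix(rnorm(300), ncol = 3)            # made-up data
md2 <- mahalanobis(x, colMeans(x), cov(x))   # squared Mahalanobis distances
pc <- prcomp(x)                              # centre the data, project onto eigenvectors
z  <- sweep(pc$x, 2, pc$sdev, "/")           # z-scores of the decorrelated component scores
all.equal(md2, rowSums(z^2), check.attributes = FALSE)   # TRUE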
@linduchyable 8 years ago
Hello, is removing outliers from a variable more than once considered manipulating or changing the data? I have a "loans to the public" variable: its mean is .17093, st. dev. .955838, skewness 7.571, kurtosis 61.436, and most of the cases of this loan are outliers. After several rounds of ranking and replacing the missing values with the mean, I reach this output: mean .2970, st. dev. .22582, skewness 2.301, kurtosis 3.885, and it ends up positively skewed. I don't know what to do: shall I keep it this way, take the first version, or continue, knowing that the 5th, 10th, 25th, 50th and 75th percentiles end up with the same number? Please help :(
@s3d871 4 years ago
Great job, saved me a lot of time!
@leonardocerliani3479 3 years ago
Amazing video! Thank you so much!
@zaphbeeblebrox5333 2 years ago
"Square n-dimensional matrices have n eigenvectors". Not true. E.g. a matrix that represents a rotation (other than by 0° or 180°) has no real eigenvalues or eigenvectors.
@colinweaver2097 3 years ago
Is there a good textbook that covers this?
@muskduh 2 years ago
Thanks for the video
@ojussinghal2501 2 years ago
This video is such a gem 🤓
@bautistabaiocchi-lora1339 3 years ago
Really well explained. Thank you.
@deashehu2591 8 years ago
Thank you, Sir! We need more intuition and fewer formulas. Please do more videos.
@muratcan__22 6 years ago
Why do we need to remove the covariance in the data?
@sacman3001 8 years ago
Awesome explanation! Thank you for posting!
@thinhphan5404 5 years ago
Thank you. This video helped me a lot.
@HyunukHa 3 years ago
Clear explanation.
@MiGotham 4 years ago
The factor multiplying the eigenvector doesn't necessarily have to be an integer, right?! It could be any scalar?
@raditz2488 3 years ago
@7:35 maybe there is a typo and the eigenvectors are put in wrongly. The eigenvectors as per my calculations are [-0.85623911, -0.5165797] and [0.5165797, -0.85623911]. Can anyone verify this?
@oldfairy 4 years ago
Thank you, great explanation. I subscribed to your channel after this video.
@StefanReii 4 years ago
Well explained, thank you!
@deepakjain4481 5 months ago
thanks a lot
@shourabhpayal1198 3 years ago
Amazing sir
@domenicodifraia7338 4 years ago
Great video man! Thanks a lot! : )
@the_iurlix 6 years ago
So clear!! Thanks man!
@ajeetis 8 years ago
Nicely explained. Thank you!
@kamilazdybal 6 years ago
Great video, thank you!
@TheGerakas 7 years ago
Your voice sounds like Tom Hanks!
@MrPorkered 6 years ago
More like Iron Man.
@XarOOraX 1 year ago
This story seems straightforward - yet after 8 minutes I am still clueless as to where it is going to lead. Maybe it is just me, but when I need to learn something, I don't want a long tension arc of "oh, what is going to happen next...". I want to start with a big picture of what is going to happen, and then fill in the details one after another, so I can sit and marvel at how the big initial problem dissolves, step by step, into smaller, understandable pieces. Inverting the story, starting from the conclusion and working down to the basics, also lets you stop once you have understood enough.
@thuongdinh5990 8 years ago
Awesome job, thank you!
@KayYesYouTuber 5 years ago
Beautiful explanation. Thank you.
@danspeed93 4 years ago
Thanks, very clear.
@bettys7298 5 years ago
Hi Matthew, I do have a problem when using R to compute it. Could you help me fix it? Thank you so much in advance! Here's the error, and how I tried to fix it but failed:
1. the error:
> mahal = mahalanobis(x,
+ colMeans(x)
+ cov(x, use="pairwise.complete.obs"))
Error: unexpected symbol in: " colMeans(x) cov"
2. the fix: is.array(nomiss[, -c(1,2)]) (-----> result = FALSE) x
@lydiakoutrouditsou8514 5 years ago
You've created an object called temArray, and then tried to run the analysis on an object called temPArray?
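For what it's worth, the "unexpected symbol" error in the question above is most likely just a missing comma between the second and third arguments; base R's signature is mahalanobis(x, center, cov). A hedged sketch with stand-in data (x here is only a placeholder for the commenter's own complete-case matrix):

x <- matrix(rnorm(100), ncol = 2)   # stand-in for the commenter's data
mahal <- mahalanobis(x,
                     colMeans(x),   # note the comma after this argument
                     cov(x, use = "pairwise.complete.obs"))
head(mahal)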
@1982Dibya 8 years ago
Could you please explain how the Mahalanobis distance is related to the eigenvectors? The video is very good and helpful, but it would help if you could explain how to use it starting from the eigenvectors.
@MatthewEClapham 8 years ago
The eigenvector is a direction. Essentially, the points are rescaled by compressing them in the eigenvector directions, but by different amounts along each eigenvector. This removes covariance in the data. That's basically what the Mahalanobis distance does.
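A minimal R sketch of that rescaling, on simulated correlated data (not the data from the video): project the centred points onto the eigenvectors and divide each direction by the square root of its eigenvalue. The transformed cloud has (near-)identity covariance, and plain Euclidean distance from its centre matches the Mahalanobis distance in the original space.

set.seed(3)
x  <- matrix(rnorm(200), ncol = 2) %*% matrix(c(2, 1, 1, 1), 2)   # correlated toy data
xc <- scale(x, center = TRUE, scale = FALSE)                      # centred points
e  <- eigen(cov(x))
w  <- xc %*% e$vectors %*% diag(1 / sqrt(e$values))               # compress along eigenvectors
round(cov(w), 10)                              # ~ identity matrix: covariance removed
all.equal(sqrt(rowSums(w^2)),                  # Euclidean distance in the transformed space
          sqrt(mahalanobis(x, colMeans(x), cov(x))),
          check.attributes = FALSE)            # TRUE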
@muratcan__22 6 years ago
@@MatthewEClapham Why do we need to remove the covariance in the data in the first place?
@bhupensinha3767 5 years ago
@@muratcan__22 : Hope you have the answer by now !!!
@cesarvillalobos1778 5 years ago
@@muratcan__22 The Euclidean distance problem.
@cesarvillalobos1778 5 years ago
@@muratcan__22 Going a little more in depth: covariance is a property of random variables, but to use Euclidean distance you only have a set of points with their positions and the distances between them; in other words, you don't have random variables, so it doesn't make sense to talk about covariance. The trick is: random variables.