23: Mahalanobis distance

158,178 views

Matthew E. Clapham

A day ago

Comments: 71
@SylvaineTropea · 2 days ago
This kind of has that old-video vibe, where it's an ancient recording of someone talking and they just explain the thing, and you know, just from listening, that anyone else listening to it will also have no problem understanding it, because the explanation is just so good.
@dom6002 · 8 months ago
It's remarkable how inept professors are at explaining the simplest of concepts. You have surpassed most of mine, thank you very much.
@yee6365 · 8 months ago
Well this is an applied statistics course, so it's way more useful than most theoretical ones
@tyronelagore1479 · 2 years ago
BEAUTIFULLY Explained. It would have been great to see the transformed plot to understand the effect it has, though you did explain it quite well verbally.
@Nobody-md5kt · 1 year ago
This is fantastic. I'm a software engineer currently learning about why our cosine similarity functions aren't doing so hot on our large embeddings vector for a large language model. This helps me understand what's happening behind the scenes much better. Thank you!
@cupckae1 · 4 months ago
Can you share your observations regarding the research?
@lbognini · 3 months ago
This is what really makes the world more unfair: when you take advantage of what someone else shared to untangle something, and then don't even want to share with others how you did it.
@anthonykoedyk715 · 2 years ago
Thank you for explaining the link between eigenvectors and the Mahalanobis distance. I'd been learning both with no linkage between them!
@LuisRIzquierdo · 3 years ago
Great video, thank you so much!! Just a minor comment that you probably know, but I think it was not clear in the video at around 8:27: eigenvalues do not have to be integers; they can be any scalar (in general, they are complex numbers), and the set of eigenvalues is a property of the linear transformation (i.e. of the matrix). You can scale any eigenvector and it will still have the same eigenvalue associated with it. In any case, thank you so much for your excellent video!
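[Editor's note: a quick R check of the point above, using an arbitrary symmetric example matrix; rescaling an eigenvector leaves its eigenvalue unchanged.]
S <- matrix(c(2, 1, 1, 2), nrow = 2)  # arbitrary symmetric example matrix
v <- eigen(S)$vectors[, 1]            # first (unit) eigenvector
S %*% v / v                           # both entries equal the eigenvalue, 3
S %*% (5 * v) / (5 * v)               # rescaled eigenvector: same eigenvalue, 3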
@qqq_Peace · 5 years ago
Excellent explanation of scaling the covariance within the data. And linking it to PCA is nice for understanding the ideas behind it!
@monta7834 · 7 years ago
Great introduction to the problem and explanation of the basics. Wish I could have found this earlier, before having wasted so much time going through those videos/articles made by people who can only explain complicated stuff in even more complicated ways.
@chelseyli7478 · 2 years ago
Thank you! You made eigenvectors, eigenvalues, and the Mahalanobis distance clear to me. Best video on these topics.
@jonaspoffyn · 7 years ago
Small remark: at the slide where you do the matrix by vector multiplication (@6:42) the colours are definitely wrong. The results are correct but the colours for both rows should be: black*red+grey*blue
@cries3168 · 2 years ago
Great video, love your style of explanation, really easy to follow along! Much better than my stats lecturer!
@tinAbraham_Indy · 9 months ago
I truly enjoy watching this tutorial. Thank you
@1982Dibya · 8 years ago
Great video! But could you please explain in detail how the inverse covariance matrix and the eigenvectors relate to the Mahalanobis distance? That would be very helpful.
@PD-vt9fe · 4 years ago
I have the same question. After doing some research, it turns out that the eigenvectors can help with the multiplication step. More specifically, a symmetric S can be written as S = P * D * P_T, where P consists of the eigenvectors and is an orthogonal matrix, D is a diagonal matrix of the eigenvalues, and P_T is the transpose of P. This can help speed up the calculation.
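[Editor's note: a minimal R sketch of the decomposition described above; S here is just a random example covariance matrix, not data from the video.]
S <- cov(matrix(rnorm(200), ncol = 2))      # example 2x2 covariance matrix
e <- eigen(S)
P <- e$vectors                              # columns are eigenvectors; P is orthogonal
D <- diag(e$values)                         # diagonal matrix of eigenvalues
max(abs(S - P %*% D %*% t(P)))              # ~0, confirming S = P D P^T
S_inv <- P %*% diag(1 / e$values) %*% t(P)  # inverse built from the eigendecomposition
max(abs(S_inv - solve(S)))                  # ~0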
@seyedmahdihosseini6748 · 4 years ago
Perfect explanation, with a thorough understanding of the underlying mathematical concepts.
@mojtabakhayatazad2944 · 1 year ago
A very good video for anyone who wants to feel math like physics
@vishaljain4915 · 1 year ago
Could not have gotten confused even if I tried to; really clear explanation.
@souravde6116 · 4 years ago
Lots of doubts.
Q1) If x is a vector defined by x = [x1; x2; x3; ...; xn], what will be the size of the covariance matrix C?
Q2) If x is an M-by-N matrix, where M is the number of state vectors and N is the total number of observations of each vector at different time instants, how do we calculate the Mahalanobis norm, what is the final size of D, and what inference can we get from this metric?
Q3) If x is an N-by-N matrix, how do we calculate the Mahalanobis norm then, what is the final size of D, and what inference can we get from this metric?
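[Editor's note: on Q1 and Q2, a small R sketch under the usual convention that observations are rows and variables are columns; for n variables the covariance matrix is n x n, and mahalanobis() returns one squared distance per observation.]
x <- matrix(rnorm(50 * 3), nrow = 50)      # 50 observations of 3 variables
dim(cov(x))                                # 3 x 3: one row/column per variable (Q1)
d2 <- mahalanobis(x, colMeans(x), cov(x))  # squared distances to the centroid
length(d2)                                 # 50: one value per observation (Q2)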
@pavster3 · 4 years ago
Excellent video - very clear. Thanks very much for posting!
@aashishadhikari8144 · 3 years ago
Came to learn the Mahalanobis distance; also understood why the Mahalanobis distance is defined that way, and what PCA does. :D Thanks.
@alvarezg.adrian · 8 years ago
Great! Understanding concepts is better than copying formulas. Thank you for your conceptual explanation.
@vangelis9911 · 3 years ago
Good job in explaining a rather complicated concept, thank you
@sheenanasim · 7 years ago
Wonderful explanation!! Even a complete beginner can pick this up. Thanks!
@sanjaykrish8719 · 3 years ago
Simply superb. You made my day.
@liuzeyuan · 2 years ago
Very well explained, thank you so much Matt!
@pockeystar · 7 years ago
How is the inverse of the covariance matrix linked to the shrinkage along the eigenvectors?
@anindadatta164 · 3 years ago
A clear statement of the conclusion in the video would have been appreciated by beginners, e.g. MD is the squared Z-score of a multivariate sample, calculated after removing the collinearity among the variables.
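[Editor's note: that summary is easy to check numerically in R; in one dimension the squared Mahalanobis distance reduces exactly to the squared z-score.]
x <- rnorm(100, mean = 5, sd = 2)
z <- (x - mean(x)) / sd(x)                              # ordinary z-scores
d2 <- mahalanobis(cbind(x), mean(x), as.matrix(var(x))) # squared 1-D Mahalanobis distances
max(abs(d2 - z^2))                                      # ~0: MD^2 equals z^2 here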
@linduchyable · 8 years ago
Hello, is removing outliers from a variable more than once considered manipulating or changing the data? I have loans-to-public data: mean .17093, st.dev .955838, skewness 7.571, kurtosis 61.436. Most of the cases of this loan are outliers. After several rounds of ranking and replacing the missing values with the mean, I reach this output: mean .2970, st.dev .22582, skewness 2.301, kurtosis 3.885, and it ends up positively skewed. I don't know what to do: shall I keep it this way, or take the first one, or do I have to continue, knowing that the 5th, 10th, 25th, 50th, and 75th percentiles end up with the same number? Please help :(
@s3d871 · 4 years ago
Great job, saved me a lot of time!
@leonardocerliani3479 · 3 years ago
Amazing video! Thank you so much!
@zaphbeeblebrox5333 · 2 years ago
"Square n-dim matrices have n eigenvectors". Not true. eg. a matrix that represents a rotation has no eigenvalues or eigenvectors.
@colinweaver2097 · 3 years ago
Is there a good textbook that covers this?
@muskduh · 2 years ago
Thanks for the video
@ojussinghal2501 · 2 years ago
This video is such a gem 🤓
@bautistabaiocchi-lora1339 · 3 years ago
Really well explained. Thank you.
@deashehu2591 · 8 years ago
Thank you, Sir! We need more intuition and fewer formulas. Please do more videos...
@muratcan__22 · 6 years ago
Why do we need to remove the covariance in the data?
@sacman3001 · 8 years ago
Awesome explanation! Thank you for posting!
@thinhphan5404 · 5 years ago
Thank you. This video helped me a lot.
@HyunukHa · 3 years ago
Clear explanation.
@MiGotham · 4 years ago
The factor the eigenvector gets multiplied by doesn't necessarily have to be an integer, right?! It could be any scalar?
@raditz2488 · 3 years ago
@7:35 maybe there is a typo and the eigenvectors are put in wrongly. The eigenvectors, as per my calculations, are [-0.85623911, -0.5165797] and [0.5165797, -0.85623911]. Can anyone verify this?
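[Editor's note: the check itself is easy in R; the matrix below is only a placeholder, since the exact covariance matrix from 7:35 isn't reproduced here. Note that eigen() may return the same eigenvectors with flipped signs or in a different column order, which can look like a mismatch.]
S <- matrix(c(1.0, 0.6, 0.6, 0.8), nrow = 2)  # placeholder symmetric matrix, NOT the video's
eigen(S)$vectors                              # columns are unit eigenvectors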
@oldfairy · 4 years ago
Thank you, great explanation. Subscribed to your channel after this video.
@StefanReii · 4 years ago
Well explained, thank you!
@deepakjain4481 · 5 months ago
thanks a lot
@shourabhpayal1198 · 3 years ago
Amazing sir
@domenicodifraia7338 · 4 years ago
Great video man! Thanks a lot! : )
@the_iurlix · 6 years ago
So clear!! Thanks man!
@ajeetis · 8 years ago
Nicely explained. Thank you!
@kamilazdybal · 6 years ago
Great video, thank you!
@TheGerakas · 7 years ago
Your voice sounds like Tom Hanks!
@MrPorkered · 6 years ago
More like Iron Man
@XarOOraX · 1 year ago
This story seems straightforward, yet after 8 minutes I am still clueless as to where it is going to lead. Maybe it is just me, but when I need to learn something, I don't want a long tension arc ("ooh, what is going to happen next..."). I want to start with a big picture of what is going to happen, and then fill in the details one after another, so I can sit and marvel at how the big initial problem dissolves step by step into smaller, understandable pieces. Inverting the story, starting from the conclusion and going back to the basics, also lets you stop once you have understood enough.
@thuongdinh5990 · 8 years ago
Awesome job, thank you!
@KayYesYouTuber · 5 years ago
Beautiful explanation. Thank you!
@danspeed93 · 4 years ago
Thanks, very clear!
@bettys7298 · 5 years ago
Hi Matthew, I have a problem when using R to compute it. Could you help me fix it? Thank you so much in advance! Here's the error, and how I tried to fix it but failed:
1. The error:
> mahal = mahalanobis(x,
+   colMeans(x)
+   cov(x, use="pairwise.complete.obs"))
Error: unexpected symbol in: " colMeans(x) cov"
2. The fix:
is.array(nomiss[, -c(1,2)]) (-----> result = FALSE)
x
@lydiakoutrouditsou8514 · 5 years ago
You've created an object called temArray, and then tried to run the analysis on an object called temPArray?
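[Editor's note: the "unexpected symbol" message in the snippet above is a parse error from the missing comma between colMeans(x) and cov(...); a corrected call would be:]
mahal <- mahalanobis(x,
                     colMeans(x),
                     cov(x, use = "pairwise.complete.obs"))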
@1982Dibya · 8 years ago
Could you please explain how the Mahalanobis distance is related to the eigenvectors? The video is very good and helpful, but it would be great if you could explain how to use it via the eigenvectors.
@MatthewEClapham · 8 years ago
The eigenvector is a direction. Essentially, the points are rescaled by compressing them along the eigenvector directions, but by different amounts along each eigenvector. This removes the covariance in the data. That's basically what the Mahalanobis distance does.
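[Editor's note: a small R sketch of the rescaling described in this reply, using the standard whitening construction; after compressing each eigenvector direction by the square root of its eigenvalue, the covariance becomes the identity, and plain Euclidean distance on the rescaled points equals the Mahalanobis distance.]
set.seed(1)
x <- matrix(rnorm(1000), ncol = 2) %*% chol(matrix(c(2, 1.2, 1.2, 1), 2))  # correlated data
e <- eigen(cov(x))
W <- e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)  # whitening matrix
z <- scale(x, center = TRUE, scale = FALSE) %*% W             # center, then rescale
round(cov(z), 10)                                             # identity: covariance removed
max(abs(rowSums(z^2) - mahalanobis(x, colMeans(x), cov(x))))  # ~0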
@muratcan__22 · 6 years ago
@@MatthewEClapham Why do we need to remove the covariance in the data in the first place?
@bhupensinha3767 · 5 years ago
@@muratcan__22 : Hope you have the answer by now !!!
@cesarvillalobos1778 · 5 years ago
@@muratcan__22 The Euclidean distance problem.
@cesarvillalobos1778 · 5 years ago
@@muratcan__22 Going a little deeper: covariance is a property of random variables, but to use the Euclidean distance you only have a set of points with their positions and the distances between them; that is, you don't have random variables, so it doesn't make sense to talk about covariance. The trick is: random variables.
@deepakkumarshukla · 5 years ago
Best!
@achillesarmstrong9639 · 6 years ago
nice video
@linkeris7994 · 6 years ago
very useful!
@康文耀-r5v · 8 years ago
thank you!